Virtual Datasets #8708

base: master

Conversation
📝 Walkthrough

This change implements virtual datasets by standardizing dataset identification on a single dataset ID (ObjectId) across backend, frontend, and API boundaries. It introduces support for virtual datasets, refactors controller and service method signatures, updates the database schema and migrations, adds new endpoints, and removes legacy and redundant code. Documentation and type definitions are updated accordingly.
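To make the identifier change concrete, here is a minimal sketch of the before/after shape; the names and URL patterns are illustrative, not the actual WEBKNOSSOS signatures, and ObjectId is stubbed as a String:

```scala
// Sketch of the identifier standardization described above.
// ObjectId is stubbed as a String; the real type lives in the WEBKNOSSOS utils.
object DatasetIdSketch {
  type ObjectId = String

  // Before: datasets addressed by organization + dataset name
  def dataLayerUriLegacy(organizationId: String, datasetName: String, layerName: String): String =
    s"/data/datasets/$organizationId/$datasetName/layers/$layerName"

  // After: datasets addressed by a single dataset ID
  def dataLayerUri(datasetId: ObjectId, layerName: String): String =
    s"/data/datasets/$datasetId/layers/$layerName"
}
```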
Estimated code review effort: 5 (~180 minutes)
Force-pushed from f7de002 to 7dcad67
Force-pushed from 7dcad67 to 42101a9
Actionable comments posted: 2
🔭 Outside diff range comments (1)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/ZarrStreamingController.scala (1)

Lines 130-179: **Remove unused method.** The `convertLayerToZarrLayer` method appears to be dead code since this logic has been moved to `ZarrStreamingService`. The method is not called anywhere in the controller.

```diff
-  private def convertLayerToZarrLayer(layer: DataLayer, zarrVersion: Int): ZarrLayer = {
-    val dataFormat = if (zarrVersion == 2) DataFormat.zarr else DataFormat.zarr3
-    layer match {
-      case s: SegmentationLayer =>
-        val rank = s.additionalAxes.map(_.length).getOrElse(0) + 4 // We're writing c, additionalAxes, xyz
-        ZarrSegmentationLayer(
-          s.name,
-          s.boundingBox,
-          s.elementClass,
-          mags = s.sortedMags.map(
-            m =>
-              MagLocator(m,
-                         Some(s"./${s.name}/${m.toMagLiteral(allowScalar = true)}"),
-                         None,
-                         Some(AxisOrder.cAdditionalxyz(rank)),
-                         None,
-                         None)),
-          mappings = s.mappings,
-          largestSegmentId = s.largestSegmentId,
-          numChannels = Some(if (s.elementClass == ElementClass.uint24) 3 else 1),
-          defaultViewConfiguration = s.defaultViewConfiguration,
-          adminViewConfiguration = s.adminViewConfiguration,
-          coordinateTransformations = s.coordinateTransformations,
-          additionalAxes = s.additionalAxes.map(reorderAdditionalAxes),
-          dataFormat = dataFormat
-        )
-      case d: DataLayer =>
-        val rank = d.additionalAxes.map(_.length).getOrElse(0) + 4 // We're writing c, additionalAxes, xyz
-        ZarrDataLayer(
-          d.name,
-          d.category,
-          d.boundingBox,
-          d.elementClass,
-          mags = d.sortedMags.map(
-            m =>
-              MagLocator(m,
-                         Some(s"./${d.name}/${m.toMagLiteral(allowScalar = true)}"),
-                         None,
-                         Some(AxisOrder.cAdditionalxyz(rank)),
-                         None,
-                         None)),
-          numChannels = Some(if (d.elementClass == ElementClass.uint24) 3 else 1),
-          defaultViewConfiguration = d.defaultViewConfiguration,
-          adminViewConfiguration = d.adminViewConfiguration,
-          coordinateTransformations = d.coordinateTransformations,
-          additionalAxes = d.additionalAxes.map(reorderAdditionalAxes),
-          dataFormat = dataFormat
-        )
-    }
-  }
```
🧹 Nitpick comments (6)
docs/data/concepts.md (1)

Lines 125-127: **Minor formatting: consider using consistent list markers.** The static analysis tools flag inconsistent list-marker usage. Consider using dashes consistently:

```diff
- + `dataLayers.mags`: Holds information about the available magnification steps of the layer.
-   * `dataLayers.mags.mag`: A 3-tuple (e.g., `[1, 1, 1]`, `[2, 2, 1]`) for uniform or non-uniform magnifications.
-   * `dataLayers.mags.path`: The path to the directory containing the data for this magnification step.
+ - `dataLayers.mags`: Holds information about the available magnification steps of the layer.
+   - `dataLayers.mags.mag`: A 3-tuple (e.g., `[1, 1, 1]`, `[2, 2, 1]`) for uniform or non-uniform magnifications.
+   - `dataLayers.mags.path`: The path to the directory containing the data for this magnification step.
```

app/models/dataset/ComposeService.scala (1)
Line 73: **Consider supporting additional layer types.** Currently only `DataLayerWithMagLocators` is supported. Consider documenting why other layer types are excluded, or plan for future support.

```diff
- case _ => Fox.failure("Unsupported layer type for composition: " + layer.getClass.getSimpleName)
+ case _ => Fox.failure(s"Unsupported layer type for composition: ${layer.getClass.getSimpleName}. Only DataLayerWithMagLocators is currently supported.")
```

app/models/dataset/DatasetService.scala (1)
Lines 291-295: **Assess the performance of `getVirtualDatasetIds()`.** The DAO method loads all virtual dataset IDs into memory in one go. With a large number of virtual datasets, this query and the in-memory list could become a bottleneck. Consider whether you need to:

- Stream results instead of calling `.toList` (e.g., Slick streaming or an FS2/Alpakka approach).
- Add pagination or a row limit to the query (see the sketch below).
- Cache the ID list if it changes infrequently.

Attention needed at:

- app/models/dataset/Dataset.scala: lines 522–529 (definition of `getVirtualDatasetIds`)
- app/models/dataset/DatasetService.scala: line 292 (where it's invoked)
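A minimal sketch of the row-limit idea, assuming a plain-SQL Slick setup; the table and column names below are illustrative, not the actual WEBKNOSSOS DAO code:

```scala
import slick.jdbc.PostgresProfile.api._

// Sketch: fetch virtual dataset IDs page by page instead of loading them all
// at once with .toList. Table and column names are assumptions.
def virtualDatasetIdsPage(limit: Int, offset: Int): DBIO[Vector[String]] =
  sql"""SELECT _id FROM webknossos.datasets
        WHERE isVirtual
        ORDER BY _id
        LIMIT $limit OFFSET $offset""".as[String]
```

Callers would then loop over pages (or wrap this in a streaming source) so that memory use stays bounded regardless of how many virtual datasets exist.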
frontend/javascripts/admin/rest_api.ts (2)
Lines 1107-1112: **Consider removing legacy formatting.** The comment indicates this code formats data in the "old format" that the backend can parse, and mentions removing the datasetId, which suggests a transitional implementation. Consider creating a follow-up task to fully migrate the backend to the new format.
Lines 1365-1370: **Redundant parameters in function call.** The call passes both `dataset` and `dataset.id` to `triggerDatasetClearCache`. Since the dataset object already contains the ID, consider refactoring to pass only the necessary parameter:

```diff
- triggerDatasetClearCache(dataset.dataStore.url, dataset, dataset.id, layerName),
+ triggerDatasetClearCache(dataset.dataStore.url, dataset.id, layerName),
```

webknossos-datastore/conf/datastore.latest.routes (1)
Lines 110-123: **Mixed parameter patterns for backward compatibility.** Some routes still use `organizationId` and `datasetName` parameters while others use `datasetId`. This mixed approach appears intentional for:

- organization-level operations (measureUsedStorage)
- dataset creation (add)
- legacy support (reload trigger)

Consider documenting which routes follow the legacy pattern and which follow the new one, for maintainability; an illustrative annotation style follows this list.
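One way to make the distinction explicit is a comment convention in the routes file itself. The paths and controller methods below are hypothetical, sketched only to show the annotation style:

```
# New pattern: datasets addressed by a single datasetId
GET   /datasets/:datasetId                             controllers.DataSourceController.read(datasetId: ObjectId)

# Legacy pattern (organizationId + datasetName), kept for backward compatibility
POST  /datasets/:organizationId/:datasetName/reload    controllers.DataSourceController.reloadLegacy(organizationId: String, datasetName: String)
```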
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- `test/db/dataSets.csv` is excluded by `!**/*.csv`
📒 Files selected for processing (59)
- app/controllers/AnnotationIOController.scala (2 hunks)
- app/controllers/DatasetController.scala (3 hunks)
- app/controllers/UserTokenController.scala (1 hunks)
- app/controllers/WKRemoteDataStoreController.scala (3 hunks)
- app/controllers/WKRemoteTracingStoreController.scala (2 hunks)
- app/models/annotation/AnnotationService.scala (4 hunks)
- app/models/dataset/ComposeService.scala (1 hunks)
- app/models/dataset/Dataset.scala (7 hunks)
- app/models/dataset/DatasetService.scala (10 hunks)
- app/models/dataset/WKRemoteDataStoreClient.scala (5 hunks)
- app/models/dataset/explore/WKExploreRemoteLayerService.scala (1 hunks)
- conf/evolutions/136-virtual-datasets.sql (1 hunks)
- conf/evolutions/reversions/136-virtual-datasets.sql (1 hunks)
- conf/webknossos.latest.routes (3 hunks)
- conf/webknossos.versioned.routes (2 hunks)
- docs/data/concepts.md (6 hunks)
- frontend/javascripts/admin/api/mesh.ts (4 hunks)
- frontend/javascripts/admin/dataset/composition_wizard/04_configure_new_dataset.tsx (1 hunks)
- frontend/javascripts/admin/rest_api.ts (20 hunks)
- frontend/javascripts/dashboard/dataset/dataset_settings_data_tab.tsx (0 hunks)
- frontend/javascripts/dashboard/dataset/dataset_settings_delete_tab.tsx (1 hunks)
- frontend/javascripts/dashboard/dataset/dataset_settings_view.tsx (2 hunks)
- frontend/javascripts/dashboard/dataset/dataset_settings_viewconfig_tab.tsx (3 hunks)
- frontend/javascripts/test/backend-snapshot-tests/datasets.e2e.ts (3 hunks)
- frontend/javascripts/test/model/binary/layers/wkstore_adapter.spec.ts (5 hunks)
- frontend/javascripts/types/schemas/datasource.schema.ts (2 hunks)
- frontend/javascripts/types/schemas/datasource.types.ts (1 hunks)
- frontend/javascripts/viewer/model/bucket_data_handling/wkstore_adapter.ts (1 hunks)
- frontend/javascripts/viewer/model/sagas/load_histogram_data_saga.ts (1 hunks)
- frontend/javascripts/viewer/model/sagas/meshes/precomputed_mesh_saga.ts (2 hunks)
- frontend/javascripts/viewer/view/right-border-tabs/connectome_tab/connectome_view.tsx (2 hunks)
- frontend/javascripts/viewer/view/right-border-tabs/segments_tab/segments_view_helper.tsx (1 hunks)
- tools/postgres/schema.sql (2 hunks)
- unreleased_changes/8708.md (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/DataStoreModule.scala (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/BinaryDataController.scala (10 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DSMeshController.scala (6 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala (18 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/LegacyController.scala (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/WKDatasetController.scala (0 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/ZarrStreamingController.scala (12 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/dataformats/layers/WKWDataLayers.scala (5 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/helpers/DatasetDeleter.scala (4 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/datasource/DataLayer.scala (2 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/AccessTokenService.scala (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/DSRemoteWebknossosClient.scala (4 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/DataSourceService.scala (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/DatasetCache.scala (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ZarrStreamingService.scala (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/DSFullMeshService.scala (3 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/ComposeService.scala (0 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala (2 hunks)
- webknossos-datastore/conf/datastore.latest.routes (4 hunks)
- webknossos-datastore/conf/datastore.versioned.routes (1 hunks)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteDatastoreClient.scala (2 hunks)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteWebknossosClient.scala (3 hunks)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/RemoteFallbackLayer.scala (2 hunks)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/volume/VolumeTracingService.scala (1 hunks)
- webknossos-tracingstore/conf/tracingstore.versioned.routes (1 hunks)
💤 Files with no reviewable changes (3)
- frontend/javascripts/dashboard/dataset/dataset_settings_data_tab.tsx
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/WKDatasetController.scala
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/ComposeService.scala
🧰 Additional context used
🧠 Learnings

The review applied the following stored learnings, each attached to one or more of the files listed above:

- Learnt from dieknolle3333 (PR #8168, frontend/javascripts/oxalis/model/sagas/proofread_saga.ts:1039): when calling `getMagInfo`, the use of `volumeTracingLayer.resolutions` is intentional and should not be changed to `volumeTracingLayer.mags`.
- Learnt from dieknolle3333 (PR #8168, frontend/javascripts/oxalis/model/sagas/volumetracing_saga.tsx:433-434): certain usages of `segmentationLayer.resolutions` are intentionally retained and should not be changed to `segmentationLayer.mags` during refactoring.
- Learnt from dieknolle3333 (PR #8168, frontend/javascripts/oxalis/geometries/materials/plane_material_factory.ts:568-585): the uniform name `allResolutions` should remain unchanged to maintain consistency with the shader code.
- Learnt from dieknolle3333 (PR #8168, frontend/javascripts/oxalis/model/reducers/skeletontracing_reducer_helpers.ts:125): in the `createNode` function, the property `resolution` is intentionally kept even when the parameter is renamed to `mag`.
- Learnt from dieknolle3333 (PR #8229, frontend/javascripts/oxalis/model/accessors/dataset_accessor.ts:348-354): for the `getDatasetExtentAsProduct` function, input validation for negative or zero dimensions is not necessary.
- Learnt from frcroth (PR #8609, app/models/dataset/Dataset.scala:753-775): in the `updateMags` method of DatasetMagsDAO, the code handles different dataset types distinctly: non-WKW datasets have `magsOpt` populated and use the first branch (axisOrder, channelIndex, credentialId); WKW datasets have `wkwResolutionsOpt` populated and use the second branch (cubeLength); the final branch is a fallback for legacy data. This ensures appropriate fields are populated for each dataset type.
- Learnt from frcroth (PR #8609, conf/evolutions/133-datasource-properties-in-db.sql:8-16): the WEBKNOSSOS database schema has separate tables for dataset layers (`dataset_layers`) and magnifications (`dataset_mags`). The `dataFormat` field is stored in the layers table, while magnification-specific fields like `cubeLength` (specific to the WKW format) are stored in the mags table.
- Learnt from frcroth (PR #8202, webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/DatasetErrorLoggingService.scala:68-72): prefer `TextUtils.stackTraceAsString(exception)` when logging exceptions instead of passing the exception directly to `logger.error`.
- Learnt from frcroth (PR #8236, webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshFileService.scala:170-173): classes extending `FoxImplicits` have access to an implicit conversion from `Option[A]` to `Fox[A]`, where `None` becomes an empty Fox that fails gracefully in for-comprehensions.
- Learnt from frcroth (PR #8236, webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshFileService.scala:161-166): in Scala for-comprehensions with the Fox error-handling monad, `Fox.fromBool()` expressions should use the `<-` binding operator instead of the `=` assignment operator to properly propagate error conditions; using `=` causes validation failures to be silently ignored.
- Learnt from frcroth (PR #8236, webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerMeshHelper.scala:143-152): in Neuroglancer's sharded format, mesh fragment data is located immediately before the manifest file in the same shard data file; the mesh data starting offset is calculated by subtracting the total mesh size from the manifest starting offset (neuroglancerOffsetStart - totalMeshSize), not by adding the manifest size.
- Learnt from frcroth (PR #8598, webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/datasource/DatasetLayerAttachments.scala:89-95): multiple attachment file types can safely use the same directory name (like "agglomerates") because the scanning logic filters by file extension; for example, AgglomerateFileInfo scans for .hdf5 files while CumsumFileInfo scans for .json files in the same "agglomerates" directory without interference.
- Learnt from MichaelBuessemeyer (PR #8352, app/models/organization/CreditTransactionService.scala): in Scala's for-comprehension with Fox (a Future-like type), the `<-` operator ensures sequential execution; if any step fails, the entire chain short-circuits and returns early, preventing subsequent operations from executing. This makes it safe to perform validation checks before database operations.
- Learnt from MichaelBuessemeyer (PR #8236, webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/NeuroglancerPrecomputedShardingUtils.scala:23-28): AlfuCache has default configured TTL (time-to-live) and size limits, providing automatic protection against unbounded memory growth.
- Learnt from MichaelBuessemeyer (PR #8485, frontend/javascripts/oxalis/model/accessors/dataset_layer_transformation_accessor.ts:384-392): `isRotationAndMirrorMaybeOnly` is intentionally designed to allow mirroring transformations (negative scale values); it uses a length comparison (`scale.length() === NON_SCALED_VECTOR.length()`) rather than component equality to permit mirrored axes while ensuring the overall scale magnitude remains the same.
- Learnt from MichaelBuessemeyer (PR #8325, webknossos-datastore/app/com/scalableminds/webknossos/datastore/image/ImageCreator.scala): precision loss during uint64-to-double conversion is acceptable in ImageCreator because the values are only used for image generation, where each color component is ultimately represented as a single byte in ARGB format; for grayscale values (except uint24, which represents RGB), each color only uses 8 bits anyway.
- Learnt from MichaelBuessemeyer (PR #8221, CHANGELOG.unreleased.md:14): changelog entries in CHANGELOG.unreleased.md should be concise and avoid unnecessarily detailed explanations.
- Learnt from philippotto (PR #8602, frontend/javascripts/oxalis/model/volumetracing/volume_annotation_sampling.ts:365-366): the parameter in `applyVoxelMap` was renamed from `sliceCount` to `sliceOffset` to better reflect its purpose; this doesn't affect existing call sites since JavaScript/TypeScript function calls are position-based.
- Learnt from philippotto (PR #8542, frontend/javascripts/viewer/model/sagas/volumetracing_saga.tsx): the WebKnossos frontend follows Redux immutability principles, where reducers act immutably and create new state objects when changes occur; this makes reference-equality checks (like `prevVolumeTracing.segments !== volumeTracing.segments`) reliable and efficient for detecting actual state changes, allowing memoization to work correctly.
Learnt from: frcroth
PR: #8598
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/datasource/DatasetLayerAttachments.scala:89-95
Timestamp: 2025-06-02T09:49:51.047Z
Learning: In WebKnossos dataset layer attachments, multiple file types can safely use the same directory name (like "agglomerates") because the scanning logic filters by file extension. For example, AgglomerateFileInfo scans for .hdf5 files while CumsumFileInfo scans for .json files in the same "agglomerates" directory without interference.
Learnt from: frcroth
PR: #8609
File: conf/evolutions/133-datasource-properties-in-db.sql:8-16
Timestamp: 2025-05-12T14:15:05.259Z
Learning: The database schema in WEBKNOSSOS has separate tables for dataset layers (dataset_layers
) and magnifications (dataset_mags
). The dataFormat
field is stored in the layers table while magnification-specific fields like cubeLength
(specific to WKW format) are stored in the mags table.
frontend/javascripts/admin/rest_api.ts (1)
Learnt from: dieknolle3333
PR: #8229
File: frontend/javascripts/oxalis/model/accessors/dataset_accessor.ts:348-354
Timestamp: 2024-11-25T14:38:49.345Z
Learning: For the getDatasetExtentAsProduct
function in dataset_accessor.ts
, input validation for negative or zero dimensions is not necessary.
conf/webknossos.versioned.routes (2)
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags
method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have
magsOpt
populated and use the first branch which includes axisOrder, channelIndex, and credentialId. - WKW datasets will have
wkwResolutionsOpt
populated and use the second branch which includes cubeLength. - The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
Learnt from: frcroth
PR: #8202
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/DatasetErrorLoggingService.scala:68-72
Timestamp: 2024-11-25T10:02:03.702Z
Learning: In DatasetErrorLoggingService.scala
, prefer using TextUtils.stackTraceAsString(exception)
when logging exceptions instead of passing the exception directly to logger.error
.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala (3)
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags
method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have
magsOpt
populated and use the first branch which includes axisOrder, channelIndex, and credentialId. - WKW datasets will have
wkwResolutionsOpt
populated and use the second branch which includes cubeLength. - The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
Learnt from: frcroth
PR: #8236
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshFileService.scala:170-173
Timestamp: 2025-04-23T08:51:57.756Z
Learning: In the webknossos codebase, classes extending FoxImplicits
have access to an implicit conversion from Option[A]
to Fox[A]
, where None
is converted to an empty Fox that fails gracefully in for-comprehensions.
Learnt from: MichaelBuessemeyer
PR: #8352
File: app/models/organization/CreditTransactionService.scala:0-0
Timestamp: 2025-01-27T12:06:42.865Z
Learning: In Scala's for-comprehension with Fox (Future-like type), the <-
operator ensures sequential execution. If any step fails, the entire chain short-circuits and returns early, preventing subsequent operations from executing. This makes it safe to perform validation checks before database operations.
🧬 Code Graph Analysis (6)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/DatasetCache.scala (2)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala (1)
invalidateCache
(685-690)app/models/job/Job.scala (1)
datasetId
(55-55)
frontend/javascripts/dashboard/dataset/dataset_settings_delete_tab.tsx (1)
frontend/javascripts/admin/rest_api.ts (1)
deleteDatasetOnDisk
(1347-1357)
frontend/javascripts/admin/dataset/composition_wizard/04_configure_new_dataset.tsx (1)
frontend/javascripts/admin/rest_api.ts (1)
createDatasetComposition
(1104-1122)
frontend/javascripts/viewer/view/right-border-tabs/connectome_tab/connectome_view.tsx (1)
frontend/javascripts/types/api_types.ts (1)
APIDataset
(242-245)
frontend/javascripts/dashboard/dataset/dataset_settings_view.tsx (1)
frontend/javascripts/admin/rest_api.ts (1)
updateDatasetDatasource
(984-995)
app/models/annotation/AnnotationService.scala (2)
app/models/dataset/WKRemoteDataStoreClient.scala (1)
hasSegmentIndexFile
(80-90)util/src/main/scala/com/scalableminds/util/objectid/ObjectId.scala (1)
toString
(12-12)
🪛 LanguageTool
docs/data/concepts.md
[style] ~104-~104: For conciseness, consider replacing this expression with an adverb.
Context: ...ynonymously for mags throughout the UI. At the moment, WebKnossos guarantees correct renderin...
(AT_THE_MOMENT)
🪛 markdownlint-cli2 (0.17.2)
docs/data/concepts.md
125-125: Unordered list style
Expected: dash; Actual: plus
(MD004, ul-style)
126-126: Unordered list style
Expected: dash; Actual: asterisk
(MD004, ul-style)
127-127: Unordered list style
Expected: dash; Actual: asterisk
(MD004, ul-style)
🔇 Additional comments (115)
unreleased_changes/8708.md (1)
1-6
: LGTM! Well-structured changelog entry.The changelog appropriately documents the key behavioral changes and references the required database evolution script for deployment tracking.
webknossos-tracingstore/conf/tracingstore.versioned.routes (1)
3-3
: LGTM! Proper API versioning for breaking changes.Adding the v10 route prefix is appropriate for the dataset identification refactor and follows the established versioning pattern.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/DataStoreModule.scala (1)
67-67
: LGTM! Proper dependency injection binding.The eager singleton binding for
ZarrStreamingService
follows the established pattern and is necessary for the service to be available for injection.frontend/javascripts/viewer/model/sagas/load_histogram_data_saga.ts (1)
40-40
: LGTM! Consistent with dataset ID refactor.The change from passing the full
dataset
object todataset.id
aligns with the broader refactor to unify dataset identification using dataset IDs instead of composite identifiers.frontend/javascripts/dashboard/dataset/dataset_settings_delete_tab.tsx (1)
45-45
: LGTM! Simplified API call with unified dataset identification.The change from constructing a separate
dataSourceId
object to passing the fulldataset
object simplifies the API call and aligns with the dataset ID-based identification refactor throughout the codebase.tools/postgres/schema.sql (2)
24-24
: Schema version increment is appropriate.The version bump to 136 correctly reflects the structural change to the datasets table.
110-110
: Well-designed virtual dataset column addition.The
isVirtual
boolean column is properly structured with:
NOT NULL
constraint for data integrityDEFAULT FALSE
ensuring backward compatibility- Logical placement within the datasets table schema
This aligns perfectly with the PR's virtual dataset functionality.
frontend/javascripts/test/model/binary/layers/wkstore_adapter.spec.ts (2)
39-39
: Correct addition of dataset ID to mock data.The mock dataset state properly includes the new
id
field, ensuring tests reflect the updated dataset identification scheme.
162-162
: Consistent URL pattern updates in test assertions.All test expectations have been systematically updated from the previous
organization/datasetPath
pattern to usedatasetId
, correctly reflecting the new unified dataset identification approach.Also applies to: 166-166, 198-198, 218-218, 237-237
frontend/javascripts/viewer/view/right-border-tabs/segments_tab/segments_view_helper.tsx (1)
79-79
: Correct update to unified dataset identification.The change from
dataset.directoryName
todataset.id
properly aligns with the system-wide refactor to use dataset IDs as the primary identifier, and the removal of the organization parameter matches the updatedhasSegmentIndexInDataStore
API signature.webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/DatasetCache.scala (1)
29-29
: Improved method signature with direct ObjectId parameter.The change from
String
toObjectId
parameter is excellent because it:
- Eliminates potential conversion errors
- Provides better type safety
- Aligns with controller usage patterns (as seen in
DataSourceController.scala
)- Maintains consistency with the cache's
ObjectId
key typeconf/webknossos.versioned.routes (2)
6-7
: Clear documentation of API version changes.The version log entry accurately describes the dataset identification changes in v10, providing helpful context for API consumers.
18-18
: Proper API versioning implementation.The v10 route is correctly:
- Placed above v9 for proper routing priority
- Mapped to
webknossos.latest.Routes
following established patterns- Supporting the unified dataset ID approach described in the PR objectives
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/volume/VolumeTracingService.scala (1)
977-979
: LGTM! Clean migration to dataset ID-based identification.The changes correctly implement the transition from using
dataSource.id
todatasetId
forRemoteFallbackLayer
creation. The approach maintains the existing functionality while aligning with the broader codebase refactor to unified dataset identification.frontend/javascripts/viewer/view/right-border-tabs/connectome_tab/connectome_view.tsx (2)
15-15
: Import statement correctly updated for dataset ID refactoring.The removal of
APIDataSourceId
from imports aligns with the transition to unified dataset identification usingAPIDataset
.
353-353
: Type annotation correctly updated to use APIDataset.The change from
APIDataSourceId
toAPIDataset
in thefetchProperties
tuple type is consistent with the codebase-wide migration to dataset ID-based identification and matches the actual usage on line 355 wheredataset
is passed.webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala (2)
444-444
: LGTM: Clean API adaptation for optional dataset ID parameter.The explicit
None
arguments align with the updateddeleteOnDisk
method signature that now accepts an optionaldatasetId: Option[ObjectId]
parameter. This maintains the existing cleanup behavior while supporting the new unified dataset identification approach.Also applies to: 452-452
499-499
: LGTM: Method rename clarifies on-disk operations.The change from
updateDataSource
toupdateDataSourceOnDisk
provides better clarity about the method's purpose and distinguishes it from remote operations, aligning with the broader refactor.frontend/javascripts/viewer/model/sagas/meshes/precomputed_mesh_saga.ts (1)
287-287
: LGTM: Parameter updates align with unified dataset identification.The changes from passing the full
dataset
object to justdataset.id
align with the backend refactor to use unified dataset IDs instead of composite organization/dataset name identifiers. The mesh API functions now correctly receive the dataset ID parameter they expect.Also applies to: 367-367
conf/evolutions/reversions/136-virtual-datasets.sql (1)
1-12
: LGTM: Well-structured database reversion script.The migration properly:
- Validates the current schema version (136)
- Drops the
isVirtual
column from the datasets table- Recreates the
datasets_
view to maintain filtering on non-deleted datasets- Updates the schema version back to 135
- Uses transaction boundaries for atomicity
The script correctly handles the reversion of virtual dataset schema changes.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/DataSourceService.scala (1)
272-272
: LGTM: Method rename improves clarity.The rename from
updateDataSource
toupdateDataSourceOnDisk
better communicates the method's purpose of persisting changes to disk, distinguishing it from in-memory or remote operations. This aligns with the broader refactor and improves code readability.frontend/javascripts/admin/dataset/composition_wizard/04_configure_new_dataset.tsx (1)
171-178
: LGTM: API call simplification aligns with backend changes.The removal of the explicit datastore URL argument simplifies the call to match the updated
createDatasetComposition
API signature. The function now correctly receives only the dataset composition parameters object, aligning with the backend's unified dataset identification approach.conf/evolutions/136-virtual-datasets.sql (1)
1-12
: LGTM! Well-structured database migration.The migration script correctly adds support for virtual datasets by adding an
isVirtual
boolean column with proper defaults and constraints. The transaction boundaries, schema version validation, and view recreation are all handled appropriately.frontend/javascripts/types/schemas/datasource.types.ts (1)
17-19
: Good refactoring to use intersection types.The change from an explicit object type to an intersection type with
BaseRemoteLayer
reduces code duplication and improves maintainability. TheboundingBox
property is now inherited fromBaseRemoteLayer
, which is more consistent with the other data layer types.frontend/javascripts/viewer/model/bucket_data_handling/wkstore_adapter.ts (1)
100-105
: LGTM! Simplified URL construction with unified dataset ID.The change from using separate
dataset.directoryName
anddataset.owningOrganization
to a singledataset.id
simplifies the URL construction and aligns with the system-wide migration to unified dataset identification.frontend/javascripts/test/backend-snapshot-tests/datasets.e2e.ts (4)
99-107
: Good addition of dynamic dataset ID resolution.The helper function
getTestDatasetId
improves test robustness by dynamically resolving dataset IDs instead of relying on hardcoded values. The error handling for missing datasets is appropriate.
110-123
: LGTM! Updated test to use dynamic dataset ID.The Zarr streaming test correctly uses the resolved dataset ID in template literals, making it more maintainable and aligned with the unified dataset identification approach.
125-145
: LGTM! Updated test to use dynamic dataset ID.The Zarr 3 streaming test correctly uses the resolved dataset ID in template literals, consistent with the refactoring approach.
147-177
: LGTM! Updated test to use dynamic dataset ID.The Dataset Paths test correctly uses the resolved dataset ID, maintaining consistency with the system-wide migration to unified dataset identification.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/datasource/DataLayer.scala (2)
493-508
: LGTM! Added WKW layer support to DataLayerWithMagLocators trait.The addition of
WKWDataLayer
andWKWSegmentationLayer
cases to themapped
method follows the same pattern as other layer types, properly applying transformations and returning updated copies. This integrates WKW layers into the unified data layer framework.
522-523
: LGTM! Added WKW layer support to getMags method.The addition of
WKWDataLayer
andWKWSegmentationLayer
cases to thegetMags
method is consistent with other layer types and properly returns themags
list. This completes the integration of WKW layers into theDataLayerWithMagLocators
trait.app/controllers/UserTokenController.scala (1)
185-198
: LGTM! Consistent implementation of dataset delete access control.The implementation follows the established pattern from
handleDataSourceAccess
and includes all necessary validations:
- Configuration check for delete feature enablement
- Dataset existence validation with global access context
- User authentication and organization/admin permission checks
The match expression properly handles the new delete access mode.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/AccessTokenService.scala (1)
46-56
: LGTM! Clean addition of ObjectId-based access request methods.The overloaded methods provide a clean transition to dataset ID-centric access control by converting
ObjectId
to string and reusing the existingDataSourceId
construction pattern. This maintains consistency with the broader refactoring to use dataset IDs throughout the codebase.app/controllers/DatasetController.scala (3)
92-92
: LGTM! ComposeService dependency injection aligns with virtual dataset functionality.The addition of
ComposeService
to the constructor properly enables the new dataset composition capabilities.
149-153
: LGTM! Method name change reflects shift to database-based dataset creation.The change from
addRemoteDatasource
toaddRemoteDatasourceToDatabase
indicates a transition from RPC-based remote calls to direct database operations for virtual dataset creation, which aligns with the virtual dataset implementation.
494-500
: LGTM! Clean implementation of dataset composition endpoint.The
compose
endpoint properly:
- Uses secured action with JSON validation
- Delegates to
ComposeService.composeDataset
- Returns the new dataset ID in a structured JSON response
- Includes appropriate error handling with custom error key
The implementation follows established controller patterns in the codebase.
frontend/javascripts/dashboard/dataset/dataset_settings_view.tsx (2)
369-369
: LGTM! Parameter update aligns with backend dataset ID refactoring.The change correctly updates the
updateDatasetDatasource
call to:
- Use
dataset.id
instead ofdataset.directoryName
- Match the new parameter order
(dataStoreUrl, dataSource, datasetId)
This aligns with the backend's shift from directory-name-based to ID-based dataset identification.
586-586
: LGTM! Passing full dataset object provides richer context.Changing from
dataSourceId={maybeDataSourceId}
todataset={this.state.dataset}
provides theDatasetSettingsViewConfigTab
component with the complete dataset object rather than partial identifier information, enabling more comprehensive dataset handling.docs/data/concepts.md (3)
26-26
: LGTM! Terminology update aligns with codebase refactoring.The change from "resolutions" to "mags" consistently reflects the backend refactoring throughout the WEBKNOSSOS codebase that replaced
wkwResolutions
withmags
for magnification steps.
72-94
: LGTM! JSON metadata example properly reflects new mags structure.The updated JSON example correctly shows:
mags
array replacingwkwResolutions
mag
field (3-tuple array) instead ofresolution
path
field for data location- Consistent structure across both color and segmentation layers
This aligns with the backend schema changes for WKW data layers.
103-105
: LGTM! Clear explanation of new mags structure.The documentation properly explains that
mag
is always a 3-element array andpath
specifies data location. The terminology update from "resolutions" to "mags" is consistent throughout.app/models/dataset/explore/WKExploreRemoteLayerService.scala (1)
108-124
: Method refactoring looks good with improved consistency.The rename from
addRemoteDatasource
toaddRemoteDatasourceToDatabase
better reflects the method's purpose. The elimination of organization lookup and token generation simplifies the logic while maintaining functionality through the newdatasetService.createVirtualDataset
approach.The parameter changes are consistent with the broader dataset ID unification effort.
app/controllers/AnnotationIOController.scala (2)
340-342
: Parameter simplification aligns with dataset ID unification.The change from
organizationId
anddatasetName
parameters to a singledatasetId
string parameter simplifies the method signature and aligns with the broader refactoring effort to use dataset IDs consistently across the codebase.
365-365
: Consistent parameter usage in RPC call.The call to
remoteDataStoreClient.hasSegmentIndexFile
correctly uses the newdatasetId
parameter, maintaining consistency with the updated method signature.frontend/javascripts/dashboard/dataset/dataset_settings_viewconfig_tab.tsx (3)
36-40
: Props change improves component design.Changing from
dataSourceId
to accepting the fulldataset
object provides better context and enables more comprehensive validation. The prop typing with| null | undefined
properly handles loading states.
46-100
: Comprehensive refactoring with proper error handling.The
validateDefaultMappings
function refactoring is well-implemented:
- Proper null check for dataset prevents runtime errors
- API calls updated to use dataset object consistently
- Error handling maintains user-friendly messages
- Caching mechanism preserved
The async validation logic correctly handles the dataset dependency.
312-312
: Consistent parameter passing.The call to
validateDefaultMappings
correctly passes thedataset
prop instead of the previousdataSourceId
, maintaining consistency with the updated function signature.app/controllers/WKRemoteTracingStoreController.scala (2)
124-133
: Method rename and simplification improve clarity.The rename from
dataSourceIdForAnnotation
todatasetIdForAnnotation
better reflects the returned value. The simplified logic that directly returnsdataset._id.toString
eliminates unnecessary organization lookups while maintaining the required functionality.
149-158
: Direct dataset ID usage simplifies method logic.The
dataStoreUriForDataset
method now acceptsdatasetId: ObjectId
directly, eliminating the need for organization-based fallback logic. This streamlines the code while maintaining the same functionality through direct dataset lookup.app/models/annotation/AnnotationService.scala (4)
121-124
: Method signature update aligns with dataset ID unification.The addition of
datasetId: ObjectId
parameter tocreateVolumeTracing
is consistent with the broader refactoring effort to use dataset IDs throughout the codebase instead of separate organization and dataset name components.
142-142
: RPC call parameter correctly updated.The call to
remoteDatastoreClient.hasSegmentIndexFile
correctly usesdatasetId.toString
andlayer.name
, consistent with the updated client method signature shown in the relevant code snippets.
241-241
: Call site properly updated with new parameter.The call to
createVolumeTracing
increateTracingForExplorational
correctly passesdataset._id
for the newdatasetId
parameter, maintaining consistency with the updated method signature.
435-435
: Additional call site correctly updated.The call to
createVolumeTracing
increateVolumeTracingBase
properly passes thedatasetId
parameter, ensuring all call sites are consistently updated with the new method signature.frontend/javascripts/types/schemas/datasource.schema.ts (1)
52-87
: Schema changes look consistent and well-structured.The renaming of
wkwResolutions
tomags
andresolution
tomag
aligns with the broader refactor to unify dataset identification. The addition of new properties (path
,credentials
,axisOrder
) with proper typing and the updated required fields maintain schema consistency across all data layer formats.webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteWebknossosClient.scala (4)
13-13
: Good cleanup: Removed unused DataSourceId import.The import simplification removes unused
DataSourceId
while retaining the necessaryDataSourceLike
, which aligns with the refactor to use dataset IDs instead of composite data source identifiers.
53-53
: Cache type simplification improves consistency.Changing the cache value type from
DataSourceId
toString
simplifies the caching mechanism and aligns with the unified dataset identification approach using dataset IDs.
71-75
: Method signature and endpoint updates are consistent.The refactor from separate
organizationId
anddatasetDirectoryName
parameters to a singledatasetId
parameter, along with the corresponding RPC endpoint path update, properly implements the unified dataset identification approach.
77-86
: Method rename and type changes align with refactor goals.The rename from
getDataSourceIdForAnnotation
togetDatasetIdForAnnotation
with the return type change fromFox[DataSourceId]
toFox[String]
consistently implements the shift to dataset ID-based identification throughout the system.webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/RemoteFallbackLayer.scala (3)
18-18
: Case class simplification improves the data model.The refactor from separate
organizationId
anddatasetDirectoryName
fields to a singledatasetId
string, alongsidelayerName
andelementClass
, aligns with the unified dataset identification approach and simplifies the data structure.
21-24
: Method rename and parameter updates are consistent.The rename from
fromDataLayerAndDataSource
tofromDataLayerAndDatasetId
with the parameter change fromDataSourceId
todatasetId
string properly implements the shift to dataset ID-based identification throughout the system.
37-38
: FallbackDataHelper updates maintain functionality.The update to use
getDatasetIdForAnnotation
and constructRemoteFallbackLayer
with the unifieddatasetId
maintains the same functionality while adopting the new dataset identification scheme.webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/DSRemoteWebknossosClient.scala (4)
53-58
: New case class is well-structured.The
DataSourceRegistrationInfo
case class properly encapsulatesdataSource
andfolderId
with appropriate JSON formatting, supporting the new dataset registration workflow.
130-133
: Method parameter update aligns with unified identification.The
fetchPaths
method update to acceptdatasetId: ObjectId
instead of composite identifiers, with corresponding RPC URL construction, properly implements the dataset ID-based approach.
144-162
: New dataset management methods are well-implemented.The new
registerDataSource
andupdateDataSource
methods properly implement dataset ID-based RPC calls with appropriate token context handling and consistent URL construction patterns.
168-172
: Virtual dataset deletion method follows consistent pattern.The
deleteVirtualDataset
method properly implements the deletion workflow using dataset ID with appropriate RPC call structure and token handling.app/models/dataset/WKRemoteDataStoreClient.scala (3)
24-25
: Cache key simplification improves efficiency.The cache key change from
(String, String, String)
to(String, String)
removes the organization component, simplifying the caching mechanism while maintaining the necessary(datasetId, layerName)
identification.
37-37
: RPC URL construction consistently uses dataset ID.All dataset-related RPC calls have been properly updated to use
dataset._id
in URL construction instead of separate organization and dataset name components, implementing the unified dataset identification approach throughout the client.Also applies to: 60-60, 67-67, 85-85, 102-102
80-90
: Method parameter update maintains functionality.The
hasSegmentIndexFile
method update to acceptdatasetId: String
instead of separate organization and dataset parameters, with corresponding cache key and RPC URL updates, properly implements the unified dataset identification while preserving the caching behavior.frontend/javascripts/admin/api/mesh.ts (2)
29-66
: LGTM! Correctly implements the dataset ID migration.The parameter change from composite identifiers to a single
datasetId
string and the corresponding URL update properly align with the system-wide migration to unified dataset identification.
79-87
: LGTM! Consistent with the dataset ID unification.The changes mirror those in
getMeshfileChunksForSegment
, maintaining consistency in the API.webknossos-datastore/app/com/scalableminds/webknossos/datastore/helpers/DatasetDeleter.scala (3)
25-31
: Good addition for checking dataset existence on disk.The
existsOnDisk
method properly handles both regular and conversion directory paths, which is useful for the virtual dataset implementation.
85-89
: Clean refactoring to use dataset ID directly.The changes to
moveSymlinks
properly align with the dataset ID unification by using the ID directly for fetching paths instead of constructing a DataSourceId.
76-76
: Symlink handling for dataset deletion is correctAll regular deletion flows supply a defined
datasetId
, somoveSymlinks
will run where needed, and virtual or in-conversion datasets legitimately skip symlink operations:
- In DataSourceController.deleteOnDisk, calls use
Some(datasetId)
, invokingmoveSymlinks
for real datasets- UploadService.deleteOnDisk passes
None
only on failed uploads (no symlinks to move)- WKRemoteDataStoreController only allows deletion of virtual datasets and never provides an ID, so symlinks are rightly untouched
No changes required.
app/models/dataset/Dataset.scala (4)
65-65
: Proper implementation of the isVirtual field.The addition of the
isVirtual
boolean field is correctly integrated throughout the Dataset model, including parsing, insertion, and the case class definition.Also applies to: 149-149, 625-625, 634-634
522-530
: LGTM! Useful method for retrieving virtual datasets.The
getVirtualDatasetIds
method provides a clean way to query all virtual datasets, which will be helpful for management and maintenance tasks.
693-693
: Correct exclusion of virtual datasets from deactivation.Virtual datasets should not be deactivated when unreported by the datastore since they don't have physical storage.
777-779
: Path column migration confirmedVerified that
conf/evolutions/126-mag-real-paths.sql
includes:ALTER TABLE webknossos.dataset_mags ADD COLUMN path TEXT;and
tools/postgres/schema.sql
definespath TEXT
onwebknossos.dataset_mags
. No further action needed.webknossos-datastore/conf/datastore.versioned.routes (3)
3-3
: Good API versioning strategy.Adding v10 for the new dataset ID-based routes while preserving v9 maintains backward compatibility.
6-44
: Comprehensive legacy route preservation.The explicit definition of all v9 routes mapped to the LegacyController ensures backward compatibility for existing integrations. The routes properly maintain the old parameter structure with organizationId and datasetDirectoryName.
16-17
: Confirm intentional unversioned legacy routesI didn’t find any other unversioned
/datasets/...
entries inwebknossos-datastore/conf/datastore.versioned.routes
. The only two routes missing the/v9/
prefix are:
- Line 16:
POST /datasets/:organizationId/:datasetDirectoryName/layers/:dataLayerName/adHocMesh @LegacyController.requestAdHocMesh(...)
- Line 17:
GET /datasets/:organizationId/:datasetDirectoryName/layers/:dataLayerName/mappings/:mappingName @LegacyController.mappingJson(...)
Please verify whether these should remain unversioned for backward compatibility. If not, update them to include the
/v9/
prefix for consistency.app/models/dataset/ComposeService.scala (2)
40-53
: Well-structured dataset composition implementation.The
composeDataset
method properly validates composability, creates the datasource, and generates a virtual dataset. Good use of the Fox monad for error handling.
63-67
: Correct handling of coordinate transformations.The transformation concatenation logic properly appends new transformations to existing ones, preserving the transformation chain.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/DSFullMeshService.scala (1)
55-62
: LGTM! Clean refactoring to pass objects directly.The signature changes from separate identifiers to DataSource and DataLayer objects improve performance by eliminating redundant repository lookups.
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteDatastoreClient.scala (1)
39-39
: Cache key type simplified correctly.The change from composite tuple keys to single dataset ID strings aligns with the unified identification scheme.
app/controllers/WKRemoteDataStoreController.scala (2)
245-255
: Virtual dataset deletion endpoint implemented correctly.The endpoint properly validates the dataset ID format, checks that the dataset exists and is virtual before marking it as deleted. Good error handling with appropriate HTTP status codes.
286-309
: Virtual dataset registration endpoint well-implemented.Good security with user token validation, organization membership check, and folder access verification. The endpoint properly creates virtual datasets through the DatasetService.
app/models/dataset/DatasetService.scala (1)
103-124
: Virtual dataset creation method well-structured.Excellent validation flow including dataset name validation, uniqueness check, folder access control, and proper user association. The implementation follows established patterns.
conf/webknossos.latest.routes (2)
96-96
: Dataset composition route added correctly.The new
/datasets/compose
endpoint enables virtual dataset creation from multiple datasets as intended.
112-115
: Dataset management routes properly updated.Routes correctly transitioned to use
ObjectId
dataset IDs and new virtual dataset endpoints added with appropriate paths.Also applies to: 121-121
webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DSMeshController.scala (2)
4-4
: Consistent refactoring to unified dataset identificationThe refactoring from composite identifiers (organizationId + datasetDirectoryName) to a single ObjectId is well-executed throughout the controller. All methods follow a consistent pattern of using
datasetCache.getWithLayer
for data retrieval andUserAccessRequest.readDataset(datasetId)
for access control.Also applies to: 23-23, 33-96
93-93
: Improved service method signatureGood architectural improvement - passing
DataSource
andDataLayer
objects directly tofullMeshService.loadFor
eliminates redundant repository lookups within the service.webknossos-datastore/app/com/scalableminds/webknossos/datastore/dataformats/layers/WKWDataLayers.scala (3)
23-23
: Trait inheritance change aligns with unified magnification handlingThe change from
DataLayer
toDataLayerWithMagLocators
properly integrates WKW layers with the unified magnification system.
60-97
: Well-designed backward-compatible JSON serializationThe custom JSON formats elegantly handle the migration from
wkwResolutions
tomags
by:
- Attempting to read the new
mags
field first- Falling back to legacy
wkwResolutions
with automatic conversion- Providing clear error messages when neither field is present
This ensures smooth migration without breaking existing clients.
Also applies to: 119-158
34-34
: defaultCubeSize is only a legacy fallback; real WKW cubeLength comes from the DBI’ve confirmed that for WKW datasets the
wkwResolutionsOpt
field is populated from thedataset_mags
table (via theDatasetMagsDAO.updateMags
logic), and thatWKWDataLayers
uses those database-providedcubeLength
values. The hardcodeddefaultCubeSize = 1024
only applies in the final fallback branch when nowkwResolutionsOpt
exists (i.e. legacy data). No change is required.webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ZarrStreamingService.scala (4)
1-29
: Well-structured Zarr streaming serviceGood architectural decision to extract Zarr-specific logic into a dedicated service. The service properly:
- Supports both Zarr v2 and v3 formats
- Centralizes format conversion and header generation
- Uses appropriate dependency injection
63-112
: Correct Zarr layer conversion implementationThe layer conversion properly:
- Constructs Zarr-compliant paths
- Sets appropriate axis ordering (c + additional + xyz)
- Handles the uint24 special case for RGB data (3 channels)
- Maintains all layer properties during conversion
114-146
: Robust Zarr cube data retrievalThe implementation correctly:
- Parses multi-dimensional coordinates with additional axes support
- Validates magnification values
- Properly scales coordinates by cube size and magnification
- Handles missing chunks with appropriate error messages
170-223
: Comprehensive directory listing implementationsThe directory listing methods correctly:
- Return version-appropriate metadata files (.zattrs/.zgroup for v2, zarr.json for v3)
- Handle all directory levels (dataset, layer, magnification)
- Support private link access with proper layer filtering
webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/BinaryDataController.scala (1)
7-7
: Consistent dataset ID refactoring across all endpointsAll controller methods have been properly refactored to:
- Accept
datasetId: ObjectId
instead of organizationId/datasetDirectoryName pairs- Use
datasetCache
for data retrieval- Update access control to
UserAccessRequest.readDataset(datasetId)
The refactoring maintains all existing functionality while simplifying the API.
Also applies to: 36-36, 55-273
webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/ZarrStreamingController.scala (2)
5-5
: Excellent controller simplification through service delegationThe controller refactoring effectively:
- Delegates all Zarr-specific logic to
ZarrStreamingService
- Removes duplicate code and complex helper methods
- Maintains clean separation between access control and business logic
- Preserves all existing functionality for both direct and private link access
Also applies to: 23-28, 44-126
301-315
: Well-structured private link access handlingThe
ifIsAnnotationLayerOrElse
helper method elegantly handles the distinction between annotation layers (served from tracingstore) and data layers (served from datastore), maintaining proper access control through token contexts.webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/LegacyController.scala (2)
90-114
: Legacy pattern correctly implementedThe method properly uses the legacy pattern with
organizationId
anddatasetDirectoryName
parameters, which aligns with the controller's purpose of maintaining backward compatibility.
539-547
: Verify the use of synchronous validationThis method uses
validateAccessFromTokenContextForSyncBlock
while all other methods use the asyncvalidateAccessFromTokenContext
. Please verify this is intentional and document why synchronous validation is needed here.frontend/javascripts/admin/rest_api.ts (2)
841-851
: Dataset ID migration correctly implementedThe function properly migrates from separate organization/dataset parameters to a unified dataset ID approach, maintaining consistency with the backend changes.
1347-1357
: Parameter type matches usageThe function correctly uses
dataset.id
from the dataset object parameter. The implementation is consistent with the dataset ID migration.webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala (10)
5-5
: LGTM! Consistent refactoring to use dataset ID.The changes properly implement the transition from DataSource IDs to dataset IDs, with appropriate access control updates and cache utilization.
Also applies to: 68-74
244-257
: Transitional implementation looks good.The method correctly bridges the new dataset ID approach with the existing mapping exploration logic. The comment acknowledges this is temporary until mappings support dataset IDs directly.
259-269
: Efficient implementation usinggetWithLayer
.Good use of the cache method that retrieves both data source and layer atomically, reducing database calls.
350-364
: Good handling of hybrid storage model.The implementation correctly handles both disk-based and virtual datasets during the transition period. The comment properly documents this temporary dual-path approach.
366-378
: Clean refactoring to support virtual datasets.The method has been appropriately simplified to delegate dataset registration to the remote client, returning the newly created dataset ID.
409-427
: Performance improvement by eliminating redundant disk reads.Good refactoring that accepts the data source as a parameter instead of reloading it from disk, reducing unnecessary I/O operations.
429-439
: Consistent use of cache infrastructure.The reload method properly leverages the dataset cache for both invalidation and reloading, aligning with the virtual dataset architecture.
441-461
: Well-structured deletion logic for hybrid storage.The method correctly handles deletion for both disk-based and virtual datasets, with appropriate permission checks and cleanup operations.
463-471
: Consistent refactoring across all segment and connectome methods.All methods have been uniformly updated to use dataset IDs, proper cache access, and updated permissions. The refactoring maintains consistency throughout.
Also applies to: 473-484, 486-504, 506-517, 519-530, 532-540, 547-576, 583-609, 611-631, 633-651
685-690
: Useful cache management endpoint.Good addition of a dedicated cache invalidation endpoint with appropriate write permissions.
...sos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteDatastoreClient.scala
Outdated
Show resolved
Hide resolved
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 1
🧹 Nitpick comments (9)
docs/data/concepts.md (1)
125-127
: Fix markdown list formatting inconsistencies.The static analysis tool flagged inconsistent list bullet styles. For better markdown compliance, consider using consistent dash (
-
) style bullets throughout:- `dataLayers.mags`: Holds information about the available magnification steps of the layer. - * `dataLayers.mags.mag`: A 3-tuple (e.g., `[1, 1, 1]`, `[2, 2, 1]`) for uniform or non-uniform magnifications. - * `dataLayers.mags.path`: The path to the directory containing the data for this magnification step. + `dataLayers.mags`: Holds information about the available magnification steps of the layer. + - `dataLayers.mags.mag`: A 3-tuple (e.g., `[1, 1, 1]`, `[2, 2, 1]`) for uniform or non-uniform magnifications. + - `dataLayers.mags.path`: The path to the directory containing the data for this magnification step.webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/RemoteFallbackLayer.scala (1)
11-11
: Consider removing unused ObjectId import.The
ObjectId
import appears to be unused in this file. If it's not needed for compilation or used in code not shown, consider removing it to keep imports clean.#!/bin/bash # Description: Check if ObjectId is used in this file # Expected: If not used, the import can be removed # Search for ObjectId usage in the file rg "ObjectId" webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/RemoteFallbackLayer.scalawebknossos-datastore/app/com/scalableminds/webknossos/datastore/dataformats/layers/WKWDataLayers.scala (1)
23-40
: Good trait refactoring with a minor concernThe extension of
DataLayerWithMagLocators
improves the abstraction. However, the hardcodeddefaultCubeSize = 1024
might be limiting for different use cases.Consider making the cube size configurable:
- def defaultCubeSize = 1024 + def defaultCubeSize: Int = 1024 // Can be overridden if neededapp/models/dataset/ComposeService.scala (2)
40-53
: Consider improving error messagingThe implementation is correct, but the error message could be more helpful by indicating which datasets are on different data stores.
- _ <- isComposable(composeRequest) ?~> "Datasets are not composable, they are not on the same data store" + _ <- isComposable(composeRequest) ?~> "dataset.compose.differentDataStores"
55-76
: Consider more descriptive error messageThe implementation correctly handles layer transformation and validation.
- case _ => Fox.failure("Unsupported layer type for composition: " + layer.getClass.getSimpleName) + case _ => Fox.failure(s"Unsupported layer type for composition: ${layer.getClass.getSimpleName}. Only DataLayerWithMagLocators is supported.")webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ZarrStreamingService.scala (2)
114-146
: Remove unnecessary Fox.successful() call.Line 121 creates an empty successful Fox that serves no purpose.
def rawZarrCube( dataSource: DataSource, dataLayer: DataLayer, mag: String, coordinates: String )(implicit m: MessagesProvider, tc: TokenContext): Fox[Array[Byte]] = for { - _ <- Fox.successful(()) reorderedAdditionalAxes = dataLayer.additionalAxes.map(reorderAdditionalAxes)
170-207
: Remove unnecessary Fox.successful() calls in directory methods.Multiple methods contain unnecessary
Fox.successful(())
calls that can be removed.def dataLayerDirectoryContents( dataSource: DataSource, dataLayer: DataLayer, zarrVersion: Int ): Fox[List[String]] = for { - _ <- Fox.successful(()) mags = dataLayer.sortedMags def dataSourceDirectoryContents( dataSource: DataSource, zarrVersion: Int ): Fox[List[String]] = for { - _ <- Fox.successful(()) layerNames = dataSource.dataLayers.map((dataLayer: DataLayer) => dataLayer.name)webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/LegacyController.scala (1)
90-356
: Legacy binary data methods correctly preserve old API behavior.The methods maintain the original API signatures and use
dataSourceRepository
for backward compatibility. While there's code duplication withBinaryDataController
, this is acceptable for a legacy compatibility layer.Consider adding deprecation notices in the documentation to encourage migration to the new dataset ID-based endpoints.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala (1)
366-378
: Significant architectural change: dataset registration moved to remote service.The
add
method now registers datasets remotely rather than creating them on disk, aligning with the virtual dataset architecture. This is a breaking change from the previous disk-based approach.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
test/db/dataSets.csv
is excluded by!**/*.csv
📒 Files selected for processing (59)
app/controllers/AnnotationIOController.scala
(2 hunks)app/controllers/DatasetController.scala
(3 hunks)app/controllers/UserTokenController.scala
(1 hunks)app/controllers/WKRemoteDataStoreController.scala
(3 hunks)app/controllers/WKRemoteTracingStoreController.scala
(2 hunks)app/models/annotation/AnnotationService.scala
(4 hunks)app/models/dataset/ComposeService.scala
(1 hunks)app/models/dataset/Dataset.scala
(7 hunks)app/models/dataset/DatasetService.scala
(10 hunks)app/models/dataset/WKRemoteDataStoreClient.scala
(5 hunks)app/models/dataset/explore/WKExploreRemoteLayerService.scala
(1 hunks)conf/evolutions/136-virtual-datasets.sql
(1 hunks)conf/evolutions/reversions/136-virtual-datasets.sql
(1 hunks)conf/webknossos.latest.routes
(3 hunks)conf/webknossos.versioned.routes
(2 hunks)docs/data/concepts.md
(6 hunks)frontend/javascripts/admin/api/mesh.ts
(4 hunks)frontend/javascripts/admin/dataset/composition_wizard/04_configure_new_dataset.tsx
(1 hunks)frontend/javascripts/admin/rest_api.ts
(20 hunks)frontend/javascripts/dashboard/dataset/dataset_settings_data_tab.tsx
(0 hunks)frontend/javascripts/dashboard/dataset/dataset_settings_delete_tab.tsx
(1 hunks)frontend/javascripts/dashboard/dataset/dataset_settings_view.tsx
(2 hunks)frontend/javascripts/dashboard/dataset/dataset_settings_viewconfig_tab.tsx
(3 hunks)frontend/javascripts/test/backend-snapshot-tests/datasets.e2e.ts
(3 hunks)frontend/javascripts/test/model/binary/layers/wkstore_adapter.spec.ts
(5 hunks)frontend/javascripts/types/schemas/datasource.schema.ts
(2 hunks)frontend/javascripts/types/schemas/datasource.types.ts
(1 hunks)frontend/javascripts/viewer/model/bucket_data_handling/wkstore_adapter.ts
(1 hunks)frontend/javascripts/viewer/model/sagas/load_histogram_data_saga.ts
(1 hunks)frontend/javascripts/viewer/model/sagas/meshes/precomputed_mesh_saga.ts
(2 hunks)frontend/javascripts/viewer/view/right-border-tabs/connectome_tab/connectome_view.tsx
(2 hunks)frontend/javascripts/viewer/view/right-border-tabs/segments_tab/segments_view_helper.tsx
(1 hunks)tools/postgres/schema.sql
(2 hunks)unreleased_changes/8708.md
(1 hunks)webknossos-datastore/app/com/scalableminds/webknossos/datastore/DataStoreModule.scala
(1 hunks)webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/BinaryDataController.scala
(10 hunks)webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DSMeshController.scala
(6 hunks)webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala
(18 hunks)webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/LegacyController.scala
(1 hunks)webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/WKDatasetController.scala
(0 hunks)webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/ZarrStreamingController.scala
(12 hunks)webknossos-datastore/app/com/scalableminds/webknossos/datastore/dataformats/layers/WKWDataLayers.scala
(5 hunks)webknossos-datastore/app/com/scalableminds/webknossos/datastore/helpers/DatasetDeleter.scala
(4 hunks)webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/datasource/DataLayer.scala
(2 hunks)webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/AccessTokenService.scala
(1 hunks)webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/DSRemoteWebknossosClient.scala
(4 hunks)webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/DataSourceService.scala
(1 hunks)webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/DatasetCache.scala
(1 hunks)webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ZarrStreamingService.scala
(1 hunks)webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/DSFullMeshService.scala
(3 hunks)webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/ComposeService.scala
(0 hunks)webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala
(2 hunks)webknossos-datastore/conf/datastore.latest.routes
(4 hunks)webknossos-datastore/conf/datastore.versioned.routes
(1 hunks)webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteDatastoreClient.scala
(2 hunks)webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteWebknossosClient.scala
(3 hunks)webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/RemoteFallbackLayer.scala
(2 hunks)webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/volume/VolumeTracingService.scala
(1 hunks)webknossos-tracingstore/conf/tracingstore.versioned.routes
(1 hunks)
💤 Files with no reviewable changes (3)
- frontend/javascripts/dashboard/dataset/dataset_settings_data_tab.tsx
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/ComposeService.scala
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/WKDatasetController.scala
🧰 Additional context used
🧠 Learnings (47)
frontend/javascripts/viewer/model/sagas/load_histogram_data_saga.ts (3)
Learnt from: dieknolle3333
PR: #8168
File: frontend/javascripts/oxalis/model/sagas/proofread_saga.ts:1039-1039
Timestamp: 2024-11-22T17:18:04.217Z
Learning: In frontend/javascripts/oxalis/model/sagas/proofread_saga.ts
, when calling getMagInfo
, the use of volumeTracingLayer.resolutions
is intentional and should not be changed to volumeTracingLayer.mags
.
Learnt from: dieknolle3333
PR: #8229
File: frontend/javascripts/oxalis/model/accessors/dataset_accessor.ts:348-354
Timestamp: 2024-11-25T14:38:49.345Z
Learning: For the getDatasetExtentAsProduct
function in dataset_accessor.ts
, input validation for negative or zero dimensions is not necessary.
Learnt from: philippotto
PR: #8602
File: frontend/javascripts/oxalis/model/volumetracing/volume_annotation_sampling.ts:365-366
Timestamp: 2025-05-07T06:17:32.810Z
Learning: The parameter in applyVoxelMap was renamed from sliceCount
to sliceOffset
to better reflect its purpose, but this doesn't affect existing call sites since JavaScript/TypeScript function calls are position-based.
frontend/javascripts/viewer/view/right-border-tabs/segments_tab/segments_view_helper.tsx (4)
Learnt from: dieknolle3333
PR: #8168
File: frontend/javascripts/oxalis/model/sagas/proofread_saga.ts:1039-1039
Timestamp: 2024-11-22T17:18:04.217Z
Learning: In frontend/javascripts/oxalis/model/sagas/proofread_saga.ts
, when calling getMagInfo
, the use of volumeTracingLayer.resolutions
is intentional and should not be changed to volumeTracingLayer.mags
.
Learnt from: dieknolle3333
PR: #8229
File: frontend/javascripts/oxalis/model/accessors/dataset_accessor.ts:348-354
Timestamp: 2024-11-25T14:38:49.345Z
Learning: For the getDatasetExtentAsProduct
function in dataset_accessor.ts
, input validation for negative or zero dimensions is not necessary.
Learnt from: philippotto
PR: #8602
File: frontend/javascripts/oxalis/model/volumetracing/volume_annotation_sampling.ts:365-366
Timestamp: 2025-05-07T06:17:32.810Z
Learning: The parameter in applyVoxelMap was renamed from sliceCount
to sliceOffset
to better reflect its purpose, but this doesn't affect existing call sites since JavaScript/TypeScript function calls are position-based.
Learnt from: MichaelBuessemeyer
PR: #8485
File: frontend/javascripts/oxalis/model/accessors/dataset_layer_transformation_accessor.ts:384-392
Timestamp: 2025-04-01T09:45:17.527Z
Learning: The function isRotationAndMirrorMaybeOnly in the dataset_layer_transformation_accessor.ts is intentionally designed to allow mirroring transformations (negative scale values). It uses the length comparison (scale.length() === NON_SCALED_VECTOR.length()) rather than component equality to permit mirrored axes while ensuring the overall scale magnitude remains the same.
frontend/javascripts/dashboard/dataset/dataset_settings_delete_tab.tsx (1)
Learnt from: dieknolle3333
PR: #8229
File: frontend/javascripts/oxalis/model/accessors/dataset_accessor.ts:348-354
Timestamp: 2024-11-25T14:38:49.345Z
Learning: For the getDatasetExtentAsProduct function in dataset_accessor.ts, input validation for negative or zero dimensions is not necessary.
frontend/javascripts/test/model/binary/layers/wkstore_adapter.spec.ts (1)
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have magsOpt populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have wkwResolutionsOpt populated and use the second branch which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
frontend/javascripts/viewer/model/bucket_data_handling/wkstore_adapter.ts (1)
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have magsOpt populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have wkwResolutionsOpt populated and use the second branch which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
conf/webknossos.versioned.routes (2)
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have magsOpt populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have wkwResolutionsOpt populated and use the second branch which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
Learnt from: frcroth
PR: #8202
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/DatasetErrorLoggingService.scala:68-72
Timestamp: 2024-11-25T10:02:03.702Z
Learning: In DatasetErrorLoggingService.scala, prefer using TextUtils.stackTraceAsString(exception) when logging exceptions instead of passing the exception directly to logger.error.
frontend/javascripts/viewer/model/sagas/meshes/precomputed_mesh_saga.ts (5)
Learnt from: dieknolle3333
PR: #8168
File: frontend/javascripts/oxalis/model/sagas/proofread_saga.ts:1039-1039
Timestamp: 2024-11-22T17:18:04.217Z
Learning: In frontend/javascripts/oxalis/model/sagas/proofread_saga.ts, when calling getMagInfo, the use of volumeTracingLayer.resolutions is intentional and should not be changed to volumeTracingLayer.mags.
Learnt from: philippotto
PR: #8602
File: frontend/javascripts/oxalis/model/volumetracing/volume_annotation_sampling.ts:365-366
Timestamp: 2025-05-07T06:17:32.810Z
Learning: The parameter in applyVoxelMap was renamed from sliceCount to sliceOffset to better reflect its purpose, but this doesn't affect existing call sites since JavaScript/TypeScript function calls are position-based.
Learnt from: dieknolle3333
PR: #8229
File: frontend/javascripts/oxalis/model/accessors/dataset_accessor.ts:348-354
Timestamp: 2024-11-25T14:38:49.345Z
Learning: For the getDatasetExtentAsProduct function in dataset_accessor.ts, input validation for negative or zero dimensions is not necessary.
Learnt from: dieknolle3333
PR: #8168
File: frontend/javascripts/oxalis/geometries/materials/plane_material_factory.ts:568-585
Timestamp: 2024-11-22T17:18:43.411Z
Learning: In the file frontend/javascripts/oxalis/geometries/materials/plane_material_factory.ts, the uniform name allResolutions should remain unchanged to maintain consistency with the shader code.
Learnt from: frcroth
PR: #8236
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerMeshHelper.scala:143-152
Timestamp: 2025-04-30T08:54:11.252Z
Learning: In Neuroglancer's sharded format, mesh fragment data is located immediately before the manifest file in the same shard data file. The correct way to calculate the mesh data starting offset is to subtract the total mesh size from the manifest starting offset (neuroglancerOffsetStart - totalMeshSize), not by adding the manifest size.
frontend/javascripts/admin/dataset/composition_wizard/04_configure_new_dataset.tsx (3)
Learnt from: dieknolle3333
PR: #8229
File: frontend/javascripts/oxalis/model/accessors/dataset_accessor.ts:348-354
Timestamp: 2024-11-25T14:38:49.345Z
Learning: For the getDatasetExtentAsProduct function in dataset_accessor.ts, input validation for negative or zero dimensions is not necessary.
Learnt from: philippotto
PR: #8602
File: frontend/javascripts/oxalis/model/volumetracing/volume_annotation_sampling.ts:365-366
Timestamp: 2025-05-07T06:17:32.810Z
Learning: The parameter in applyVoxelMap was renamed from sliceCount to sliceOffset to better reflect its purpose, but this doesn't affect existing call sites since JavaScript/TypeScript function calls are position-based.
Learnt from: dieknolle3333
PR: #8168
File: frontend/javascripts/oxalis/model/sagas/proofread_saga.ts:1039-1039
Timestamp: 2024-11-22T17:18:04.217Z
Learning: In frontend/javascripts/oxalis/model/sagas/proofread_saga.ts, when calling getMagInfo, the use of volumeTracingLayer.resolutions is intentional and should not be changed to volumeTracingLayer.mags.
frontend/javascripts/test/backend-snapshot-tests/datasets.e2e.ts (1)
Learnt from: dieknolle3333
PR: #8229
File: frontend/javascripts/oxalis/model/accessors/dataset_accessor.ts:348-354
Timestamp: 2024-11-25T14:38:49.345Z
Learning: For the getDatasetExtentAsProduct function in dataset_accessor.ts, input validation for negative or zero dimensions is not necessary.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/DatasetCache.scala (1)
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have magsOpt populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have wkwResolutionsOpt populated and use the second branch which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/AccessTokenService.scala (1)
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have magsOpt populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have wkwResolutionsOpt populated and use the second branch which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
docs/data/concepts.md (8)
Learnt from: frcroth
PR: #8609
File: conf/evolutions/133-datasource-properties-in-db.sql:8-16
Timestamp: 2025-05-12T14:15:05.259Z
Learning: The database schema in WEBKNOSSOS has separate tables for dataset layers (dataset_layers) and magnifications (dataset_mags). The dataFormat field is stored in the layers table while magnification-specific fields like cubeLength (specific to WKW format) are stored in the mags table.
Learnt from: dieknolle3333
PR: #8168
File: frontend/javascripts/oxalis/model/sagas/volumetracing_saga.tsx:433-434
Timestamp: 2024-11-22T17:19:07.947Z
Learning: In the codebase, certain usages of segmentationLayer.resolutions are intentionally retained and should not be changed to segmentationLayer.mags during refactoring.
Learnt from: dieknolle3333
PR: #8168
File: frontend/javascripts/oxalis/model/sagas/proofread_saga.ts:1039-1039
Timestamp: 2024-11-22T17:18:04.217Z
Learning: In frontend/javascripts/oxalis/model/sagas/proofread_saga.ts, when calling getMagInfo, the use of volumeTracingLayer.resolutions is intentional and should not be changed to volumeTracingLayer.mags.
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have magsOpt populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have wkwResolutionsOpt populated and use the second branch which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
Learnt from: MichaelBuessemeyer
PR: #8221
File: CHANGELOG.unreleased.md:14-14
Timestamp: 2024-11-25T14:52:23.201Z
Learning: For the webknossos project, changelog entries in CHANGELOG.unreleased.md should be concise and avoid unnecessary detailed explanations.
Learnt from: dieknolle3333
PR: #8168
File: frontend/javascripts/oxalis/model/reducers/skeletontracing_reducer_helpers.ts:125-125
Timestamp: 2024-11-22T17:17:39.914Z
Learning: In the createNode function within skeletontracing_reducer_helpers.ts, the property resolution is intentionally kept even when the parameter is renamed to mag.
Learnt from: dieknolle3333
PR: #8168
File: frontend/javascripts/oxalis/geometries/materials/plane_material_factory.ts:568-585
Timestamp: 2024-11-22T17:18:43.411Z
Learning: In the file frontend/javascripts/oxalis/geometries/materials/plane_material_factory.ts, the uniform name allResolutions should remain unchanged to maintain consistency with the shader code.
Learnt from: frcroth
PR: #8598
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/datasource/DatasetLayerAttachments.scala:89-95
Timestamp: 2025-06-02T09:49:51.047Z
Learning: In WebKnossos dataset layer attachments, multiple file types can safely use the same directory name (like "agglomerates") because the scanning logic filters by file extension. For example, AgglomerateFileInfo scans for .hdf5 files while CumsumFileInfo scans for .json files in the same "agglomerates" directory without interference.
frontend/javascripts/types/schemas/datasource.types.ts (1)
Learnt from: dieknolle3333
PR: #8168
File: frontend/javascripts/oxalis/geometries/materials/plane_material_factory.ts:568-585
Timestamp: 2024-11-22T17:18:43.411Z
Learning: In the file frontend/javascripts/oxalis/geometries/materials/plane_material_factory.ts, the uniform name allResolutions should remain unchanged to maintain consistency with the shader code.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/datasource/DataLayer.scala (3)
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have magsOpt populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have wkwResolutionsOpt populated and use the second branch which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
Learnt from: frcroth
PR: #8609
File: conf/evolutions/133-datasource-properties-in-db.sql:8-16
Timestamp: 2025-05-12T14:15:05.259Z
Learning: The database schema in WEBKNOSSOS has separate tables for dataset layers (dataset_layers) and magnifications (dataset_mags). The dataFormat field is stored in the layers table while magnification-specific fields like cubeLength (specific to WKW format) are stored in the mags table.
Learnt from: dieknolle3333
PR: #8168
File: frontend/javascripts/oxalis/model/sagas/volumetracing_saga.tsx:433-434
Timestamp: 2024-11-22T17:19:07.947Z
Learning: In the codebase, certain usages of segmentationLayer.resolutions are intentionally retained and should not be changed to segmentationLayer.mags during refactoring.
tools/postgres/schema.sql (1)
Learnt from: frcroth
PR: #8609
File: conf/evolutions/133-datasource-properties-in-db.sql:8-16
Timestamp: 2025-05-12T14:15:05.259Z
Learning: The database schema in WEBKNOSSOS has separate tables for dataset layers (dataset_layers) and magnifications (dataset_mags). The dataFormat field is stored in the layers table while magnification-specific fields like cubeLength (specific to WKW format) are stored in the mags table.
frontend/javascripts/viewer/view/right-border-tabs/connectome_tab/connectome_view.tsx (1)
Learnt from: philippotto
PR: #8542
File: frontend/javascripts/viewer/model/sagas/volumetracing_saga.tsx:0-0
Timestamp: 2025-05-30T12:36:09.930Z
Learning: The WebKnossos frontend follows Redux immutability principles where reducers act immutably and create new state objects when changes occur. This makes reference equality checks (like prevVolumeTracing.segments !== volumeTracing.segments) reliable and efficient for detecting actual state changes, allowing memoization to work correctly.
frontend/javascripts/dashboard/dataset/dataset_settings_viewconfig_tab.tsx (1)
Learnt from: dieknolle3333
PR: #8229
File: frontend/javascripts/oxalis/model/accessors/dataset_accessor.ts:348-354
Timestamp: 2024-11-25T14:38:49.345Z
Learning: For the getDatasetExtentAsProduct function in dataset_accessor.ts, input validation for negative or zero dimensions is not necessary.
frontend/javascripts/dashboard/dataset/dataset_settings_view.tsx (1)
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have magsOpt populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have wkwResolutionsOpt populated and use the second branch which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
app/controllers/WKRemoteTracingStoreController.scala (1)
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have magsOpt populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have wkwResolutionsOpt populated and use the second branch which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
app/controllers/UserTokenController.scala (4)
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have magsOpt populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have wkwResolutionsOpt populated and use the second branch which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
Learnt from: frcroth
PR: #8236
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshFileService.scala:170-173
Timestamp: 2025-04-23T08:51:57.756Z
Learning: In the webknossos codebase, classes extending FoxImplicits have access to an implicit conversion from Option[A] to Fox[A], where None is converted to an empty Fox that fails gracefully in for-comprehensions.
Learnt from: MichaelBuessemeyer
PR: #8352
File: app/models/organization/CreditTransactionService.scala:0-0
Timestamp: 2025-01-27T12:06:42.865Z
Learning: In Scala's for-comprehension with Fox (Future-like type), the <- operator ensures sequential execution. If any step fails, the entire chain short-circuits and returns early, preventing subsequent operations from executing. This makes it safe to perform validation checks before database operations.
Learnt from: frcroth
PR: #8236
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshFileService.scala:161-166
Timestamp: 2025-04-28T14:18:04.368Z
Learning: In Scala for-comprehensions with the Fox error handling monad, Fox.fromBool() expressions should use the <- binding operator instead of the = assignment operator to properly propagate error conditions. Using = will cause validation failures to be silently ignored.
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteWebknossosClient.scala (1)
Learnt from: MichaelBuessemeyer
PR: #8236
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/NeuroglancerPrecomputedShardingUtils.scala:23-28
Timestamp: 2025-04-25T11:06:13.275Z
Learning: AlfuCache in the WebKnossos codebase has default configured TTL (time-to-live) and size limits, providing automatic protection against unbounded memory growth.
app/controllers/AnnotationIOController.scala (1)
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have magsOpt populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have wkwResolutionsOpt populated and use the second branch which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
frontend/javascripts/types/schemas/datasource.schema.ts (6)
Learnt from: frcroth
PR: #8609
File: conf/evolutions/133-datasource-properties-in-db.sql:8-16
Timestamp: 2025-05-12T14:15:05.259Z
Learning: The database schema in WEBKNOSSOS has separate tables for dataset layers (dataset_layers) and magnifications (dataset_mags). The dataFormat field is stored in the layers table while magnification-specific fields like cubeLength (specific to WKW format) are stored in the mags table.
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have magsOpt populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have wkwResolutionsOpt populated and use the second branch which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
Learnt from: dieknolle3333
PR: #8168
File: frontend/javascripts/oxalis/model/sagas/proofread_saga.ts:1039-1039
Timestamp: 2024-11-22T17:18:04.217Z
Learning: In frontend/javascripts/oxalis/model/sagas/proofread_saga.ts, when calling getMagInfo, the use of volumeTracingLayer.resolutions is intentional and should not be changed to volumeTracingLayer.mags.
Learnt from: dieknolle3333
PR: #8168
File: frontend/javascripts/oxalis/geometries/materials/plane_material_factory.ts:568-585
Timestamp: 2024-11-22T17:18:43.411Z
Learning: In the file frontend/javascripts/oxalis/geometries/materials/plane_material_factory.ts, the uniform name allResolutions should remain unchanged to maintain consistency with the shader code.
Learnt from: dieknolle3333
PR: #8168
File: frontend/javascripts/oxalis/model/reducers/skeletontracing_reducer_helpers.ts:125-125
Timestamp: 2024-11-22T17:17:39.914Z
Learning: In the createNode function within skeletontracing_reducer_helpers.ts, the property resolution is intentionally kept even when the parameter is renamed to mag.
Learnt from: dieknolle3333
PR: #8168
File: frontend/javascripts/oxalis/model/sagas/volumetracing_saga.tsx:433-434
Timestamp: 2024-11-22T17:19:07.947Z
Learning: In the codebase, certain usages of segmentationLayer.resolutions are intentionally retained and should not be changed to segmentationLayer.mags during refactoring.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/helpers/DatasetDeleter.scala (3)
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have magsOpt populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have wkwResolutionsOpt populated and use the second branch which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
Learnt from: frcroth
PR: #8236
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshFileService.scala:170-173
Timestamp: 2025-04-23T08:51:57.756Z
Learning: In the webknossos codebase, classes extending FoxImplicits have access to an implicit conversion from Option[A] to Fox[A], where None is converted to an empty Fox that fails gracefully in for-comprehensions.
Learnt from: MichaelBuessemeyer
PR: #8352
File: app/models/organization/CreditTransactionService.scala:0-0
Timestamp: 2025-01-27T12:06:42.865Z
Learning: In Scala's for-comprehension with Fox (Future-like type), the <- operator ensures sequential execution. If any step fails, the entire chain short-circuits and returns early, preventing subsequent operations from executing. This makes it safe to perform validation checks before database operations.
app/controllers/DatasetController.scala (1)
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have magsOpt populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have wkwResolutionsOpt populated and use the second branch which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
app/models/dataset/WKRemoteDataStoreClient.scala (1)
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have magsOpt populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have wkwResolutionsOpt populated and use the second branch which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteDatastoreClient.scala (4)
Learnt from: frcroth
PR: #8202
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/DatasetErrorLoggingService.scala:68-72
Timestamp: 2024-11-25T10:02:03.702Z
Learning: In DatasetErrorLoggingService.scala, prefer using TextUtils.stackTraceAsString(exception) when logging exceptions instead of passing the exception directly to logger.error.
Learnt from: MichaelBuessemeyer
PR: #8236
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/NeuroglancerPrecomputedShardingUtils.scala:23-28
Timestamp: 2025-04-25T11:06:13.275Z
Learning: AlfuCache in the WebKnossos codebase has default configured TTL (time-to-live) and size limits, providing automatic protection against unbounded memory growth.
Learnt from: frcroth
PR: #8236
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshFileService.scala:170-173
Timestamp: 2025-04-23T08:51:57.756Z
Learning: In the webknossos codebase, classes extending FoxImplicits have access to an implicit conversion from Option[A] to Fox[A], where None is converted to an empty Fox that fails gracefully in for-comprehensions.
Learnt from: MichaelBuessemeyer
PR: #8352
File: app/models/organization/CreditTransactionService.scala:0-0
Timestamp: 2025-01-27T12:06:42.865Z
Learning: In Scala's for-comprehension with Fox (Future-like type), the <- operator ensures sequential execution. If any step fails, the entire chain short-circuits and returns early, preventing subsequent operations from executing. This makes it safe to perform validation checks before database operations.
app/models/annotation/AnnotationService.scala (2)
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have magsOpt populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have wkwResolutionsOpt populated and use the second branch which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
Learnt from: dieknolle3333
PR: #8168
File: frontend/javascripts/oxalis/model/sagas/proofread_saga.ts:1039-1039
Timestamp: 2024-11-22T17:18:04.217Z
Learning: In frontend/javascripts/oxalis/model/sagas/proofread_saga.ts, when calling getMagInfo, the use of volumeTracingLayer.resolutions is intentional and should not be changed to volumeTracingLayer.mags.
conf/webknossos.latest.routes (1)
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have magsOpt populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have wkwResolutionsOpt populated and use the second branch which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
webknossos-datastore/conf/datastore.versioned.routes (1)
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have magsOpt populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have wkwResolutionsOpt populated and use the second branch which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/volume/VolumeTracingService.scala (4)
Learnt from: dieknolle3333
PR: #8168
File: frontend/javascripts/oxalis/model/sagas/proofread_saga.ts:1039-1039
Timestamp: 2024-11-22T17:18:04.217Z
Learning: In frontend/javascripts/oxalis/model/sagas/proofread_saga.ts, when calling getMagInfo, the use of volumeTracingLayer.resolutions is intentional and should not be changed to volumeTracingLayer.mags.
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have magsOpt populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have wkwResolutionsOpt populated and use the second branch which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
Learnt from: frcroth
PR: #8236
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshFileService.scala:170-173
Timestamp: 2025-04-23T08:51:57.756Z
Learning: In the webknossos codebase, classes extending FoxImplicits have access to an implicit conversion from Option[A] to Fox[A], where None is converted to an empty Fox that fails gracefully in for-comprehensions.
Learnt from: MichaelBuessemeyer
PR: #8352
File: app/models/organization/CreditTransactionService.scala:0-0
Timestamp: 2025-01-27T12:06:42.865Z
Learning: In Scala's for-comprehension with Fox (Future-like type), the <- operator ensures sequential execution. If any step fails, the entire chain short-circuits and returns early, preventing subsequent operations from executing. This makes it safe to perform validation checks before database operations.
app/models/dataset/Dataset.scala (2)
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have magsOpt populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have wkwResolutionsOpt populated and use the second branch which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
Learnt from: frcroth
PR: #8609
File: conf/evolutions/133-datasource-properties-in-db.sql:8-16
Timestamp: 2025-05-12T14:15:05.259Z
Learning: The database schema in WEBKNOSSOS has separate tables for dataset layers (dataset_layers) and magnifications (dataset_mags). The dataFormat field is stored in the layers table while magnification-specific fields like cubeLength (specific to WKW format) are stored in the mags table.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/DSRemoteWebknossosClient.scala (2)
Learnt from: MichaelBuessemeyer
PR: #8352
File: app/models/organization/CreditTransactionService.scala:0-0
Timestamp: 2025-01-27T12:06:42.865Z
Learning: In Scala's for-comprehension with Fox (Future-like type), the <- operator ensures sequential execution. If any step fails, the entire chain short-circuits and returns early, preventing subsequent operations from executing. This makes it safe to perform validation checks before database operations.
Learnt from: frcroth
PR: #8236
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshFileService.scala:170-173
Timestamp: 2025-04-23T08:51:57.756Z
Learning: In the webknossos codebase, classes extending FoxImplicits have access to an implicit conversion from Option[A] to Fox[A], where None is converted to an empty Fox that fails gracefully in for-comprehensions.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DSMeshController.scala (1)
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have magsOpt populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have wkwResolutionsOpt populated and use the second branch which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
frontend/javascripts/admin/api/mesh.ts (3)
Learnt from: dieknolle3333
PR: #8229
File: frontend/javascripts/oxalis/model/accessors/dataset_accessor.ts:348-354
Timestamp: 2024-11-25T14:38:49.345Z
Learning: For the getDatasetExtentAsProduct function in dataset_accessor.ts, input validation for negative or zero dimensions is not necessary.
Learnt from: philippotto
PR: #8602
File: frontend/javascripts/oxalis/model/volumetracing/volume_annotation_sampling.ts:365-366
Timestamp: 2025-05-07T06:17:32.810Z
Learning: The parameter in applyVoxelMap was renamed from sliceCount to sliceOffset to better reflect its purpose, but this doesn't affect existing call sites since JavaScript/TypeScript function calls are position-based.
Learnt from: dieknolle3333
PR: #8168
File: frontend/javascripts/oxalis/geometries/materials/plane_material_factory.ts:568-585
Timestamp: 2024-11-22T17:18:43.411Z
Learning: In the file frontend/javascripts/oxalis/geometries/materials/plane_material_factory.ts, the uniform name allResolutions should remain unchanged to maintain consistency with the shader code.
app/models/dataset/DatasetService.scala (5)
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have magsOpt populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have wkwResolutionsOpt populated and use the second branch which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
Learnt from: MichaelBuessemeyer
PR: #8352
File: app/models/organization/CreditTransactionService.scala:0-0
Timestamp: 2025-01-27T12:06:42.865Z
Learning: In Scala's for-comprehension with Fox (Future-like type), the <- operator ensures sequential execution. If any step fails, the entire chain short-circuits and returns early, preventing subsequent operations from executing. This makes it safe to perform validation checks before database operations.
Learnt from: frcroth
PR: #8236
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshFileService.scala:170-173
Timestamp: 2025-04-23T08:51:57.756Z
Learning: In the webknossos codebase, classes extending FoxImplicits have access to an implicit conversion from Option[A] to Fox[A], where None is converted to an empty Fox that fails gracefully in for-comprehensions.
Learnt from: frcroth
PR: #8236
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshFileService.scala:161-166
Timestamp: 2025-04-28T14:18:04.368Z
Learning: In Scala for-comprehensions with the Fox error handling monad, Fox.fromBool() expressions should use the <- binding operator instead of the = assignment operator to properly propagate error conditions. Using = will cause validation failures to be silently ignored.
Learnt from: frcroth
PR: #8609
File: conf/evolutions/133-datasource-properties-in-db.sql:8-16
Timestamp: 2025-05-12T14:15:05.259Z
Learning: The database schema in WEBKNOSSOS has separate tables for dataset layers (dataset_layers) and magnifications (dataset_mags). The dataFormat field is stored in the layers table while magnification-specific fields like cubeLength (specific to WKW format) are stored in the mags table.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/DSFullMeshService.scala (2)
Learnt from: frcroth
PR: #8236
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshFileService.scala:170-173
Timestamp: 2025-04-23T08:51:57.756Z
Learning: In the webknossos codebase, classes extending FoxImplicits have access to an implicit conversion from Option[A] to Fox[A], where None is converted to an empty Fox that fails gracefully in for-comprehensions.
Learnt from: MichaelBuessemeyer
PR: #8352
File: app/models/organization/CreditTransactionService.scala:0-0
Timestamp: 2025-01-27T12:06:42.865Z
Learning: In Scala's for-comprehension with Fox (Future-like type), the <- operator ensures sequential execution. If any step fails, the entire chain short-circuits and returns early, preventing subsequent operations from executing. This makes it safe to perform validation checks before database operations.
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/RemoteFallbackLayer.scala (3)
Learnt from: frcroth
PR: #8202
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/DatasetErrorLoggingService.scala:68-72
Timestamp: 2024-11-25T10:02:03.702Z
Learning: In DatasetErrorLoggingService.scala, prefer using TextUtils.stackTraceAsString(exception) when logging exceptions instead of passing the exception directly to logger.error.
Learnt from: frcroth
PR: #8236
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshFileService.scala:170-173
Timestamp: 2025-04-23T08:51:57.756Z
Learning: In the webknossos codebase, classes extending FoxImplicits have access to an implicit conversion from Option[A] to Fox[A], where None is converted to an empty Fox that fails gracefully in for-comprehensions.
Learnt from: MichaelBuessemeyer
PR: #8352
File: app/models/organization/CreditTransactionService.scala:0-0
Timestamp: 2025-01-27T12:06:42.865Z
Learning: In Scala's for-comprehension with Fox (Future-like type), the <- operator ensures sequential execution. If any step fails, the entire chain short-circuits and returns early, preventing subsequent operations from executing. This makes it safe to perform validation checks before database operations.
app/controllers/WKRemoteDataStoreController.scala (1)
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have magsOpt populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have wkwResolutionsOpt populated and use the second branch which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ZarrStreamingService.scala (1)
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have magsOpt populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have wkwResolutionsOpt populated and use the second branch which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/dataformats/layers/WKWDataLayers.scala (5)
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have magsOpt populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have wkwResolutionsOpt populated and use the second branch which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
Learnt from: dieknolle3333
PR: #8168
File: frontend/javascripts/oxalis/model/sagas/volumetracing_saga.tsx:433-434
Timestamp: 2024-11-22T17:19:07.947Z
Learning: In the codebase, certain usages of segmentationLayer.resolutions are intentionally retained and should not be changed to segmentationLayer.mags during refactoring.
Learnt from: frcroth
PR: #8609
File: conf/evolutions/133-datasource-properties-in-db.sql:8-16
Timestamp: 2025-05-12T14:15:05.259Z
Learning: The database schema in WEBKNOSSOS has separate tables for dataset layers (dataset_layers) and magnifications (dataset_mags). The dataFormat field is stored in the layers table while magnification-specific fields like cubeLength (specific to WKW format) are stored in the mags table.
Learnt from: dieknolle3333
PR: #8168
File: frontend/javascripts/oxalis/model/sagas/proofread_saga.ts:1039-1039
Timestamp: 2024-11-22T17:18:04.217Z
Learning: In frontend/javascripts/oxalis/model/sagas/proofread_saga.ts, when calling getMagInfo, the use of volumeTracingLayer.resolutions is intentional and should not be changed to volumeTracingLayer.mags.
Learnt from: dieknolle3333
PR: #8168
File: frontend/javascripts/oxalis/model/reducers/skeletontracing_reducer_helpers.ts:125-125
Timestamp: 2024-11-22T17:17:39.914Z
Learning: In the createNode function within skeletontracing_reducer_helpers.ts, the property resolution is intentionally kept even when the parameter is renamed to mag.
webknossos-datastore/conf/datastore.latest.routes (3)
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have magsOpt populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have wkwResolutionsOpt populated and use the second branch which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
Learnt from: frcroth
PR: #8598
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/datasource/DatasetLayerAttachments.scala:89-95
Timestamp: 2025-06-02T09:49:51.047Z
Learning: In WebKnossos dataset layer attachments, multiple file types can safely use the same directory name (like "agglomerates") because the scanning logic filters by file extension. For example, AgglomerateFileInfo scans for .hdf5 files while CumsumFileInfo scans for .json files in the same "agglomerates" directory without interference.
Learnt from: frcroth
PR: #8609
File: conf/evolutions/133-datasource-properties-in-db.sql:8-16
Timestamp: 2025-05-12T14:15:05.259Z
Learning: The database schema in WEBKNOSSOS has separate tables for dataset layers (dataset_layers) and magnifications (dataset_mags). The dataFormat field is stored in the layers table while magnification-specific fields like cubeLength (specific to WKW format) are stored in the mags table.
app/models/dataset/ComposeService.scala (1)
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have magsOpt populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have wkwResolutionsOpt populated and use the second branch which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/ZarrStreamingController.scala (1)
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have magsOpt populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have wkwResolutionsOpt populated and use the second branch which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/BinaryDataController.scala (4)
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have magsOpt populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have wkwResolutionsOpt populated and use the second branch which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
Learnt from: frcroth
PR: #8236
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshFileService.scala:170-173
Timestamp: 2025-04-23T08:51:57.756Z
Learning: In the webknossos codebase, classes extending FoxImplicits have access to an implicit conversion from Option[A] to Fox[A], where None is converted to an empty Fox that fails gracefully in for-comprehensions.
Learnt from: MichaelBuessemeyer
PR: #8352
File: app/models/organization/CreditTransactionService.scala:0-0
Timestamp: 2025-01-27T12:06:42.865Z
Learning: In Scala's for-comprehension with Fox (Future-like type), the <- operator ensures sequential execution. If any step fails, the entire chain short-circuits and returns early, preventing subsequent operations from executing. This makes it safe to perform validation checks before database operations.
Learnt from: MichaelBuessemeyer
PR: #8325
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/image/ImageCreator.scala:0-0
Timestamp: 2025-02-24T08:22:51.814Z
Learning: In ImageCreator.scala, precision loss during uint64 to double conversion is acceptable because the values are only used for image generation where each color component is ultimately represented as a single byte in ARGB format. For grayscale values (except uint24 which represents RGB), each color only uses 8 bits anyway.
frontend/javascripts/admin/rest_api.ts (1)
Learnt from: dieknolle3333
PR: #8229
File: frontend/javascripts/oxalis/model/accessors/dataset_accessor.ts:348-354
Timestamp: 2024-11-25T14:38:49.345Z
Learning: For the getDatasetExtentAsProduct function in dataset_accessor.ts, input validation for negative or zero dimensions is not necessary.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala (3)
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the updateMags method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have magsOpt populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have wkwResolutionsOpt populated and use the second branch which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
Learnt from: frcroth
PR: #8236
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshFileService.scala:170-173
Timestamp: 2025-04-23T08:51:57.756Z
Learning: In the webknossos codebase, classes extending FoxImplicits have access to an implicit conversion from Option[A] to Fox[A], where None is converted to an empty Fox that fails gracefully in for-comprehensions.
Learnt from: MichaelBuessemeyer
PR: #8352
File: app/models/organization/CreditTransactionService.scala:0-0
Timestamp: 2025-01-27T12:06:42.865Z
Learning: In Scala's for-comprehension with Fox (Future-like type), the <- operator ensures sequential execution. If any step fails, the entire chain short-circuits and returns early, preventing subsequent operations from executing. This makes it safe to perform validation checks before database operations.
🧬 Code Graph Analysis (13)
frontend/javascripts/viewer/model/sagas/load_histogram_data_saga.ts (2)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/LegacyController.scala (1): histogram (332-347)
- frontend/javascripts/admin/rest_api.ts (1): getHistogramForLayer (1424-1435)
frontend/javascripts/dashboard/dataset/dataset_settings_delete_tab.tsx (1)
- frontend/javascripts/admin/rest_api.ts (1): deleteDatasetOnDisk (1347-1357)
frontend/javascripts/admin/dataset/composition_wizard/04_configure_new_dataset.tsx (1)
- frontend/javascripts/admin/rest_api.ts (1): createDatasetComposition (1104-1122)
frontend/javascripts/test/backend-snapshot-tests/datasets.e2e.ts (2)
- app/models/job/Job.scala (2): datasetName (53-53), datasetId (55-55)
- frontend/javascripts/viewer/api/wk_dev.ts (1): api (74-80)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/DatasetCache.scala (2)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala (1): invalidateCache (685-690)
- app/models/job/Job.scala (1): datasetId (55-55)
frontend/javascripts/viewer/view/right-border-tabs/connectome_tab/connectome_view.tsx (1)
- frontend/javascripts/types/api_types.ts (1): APIDataset (242-245)
frontend/javascripts/dashboard/dataset/dataset_settings_view.tsx (1)
- frontend/javascripts/admin/rest_api.ts (1): updateDatasetDatasource (984-995)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteWebknossosClient.scala (2)
- app/models/job/Job.scala (1): datasetId (55-55)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/rpc/RPCRequest.scala (3): addQueryString (28-31), silent (64-67), getWithJsonResponse (100-103)
app/models/dataset/WKRemoteDataStoreClient.scala (3)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/rpc/RPCRequest.scala (2): addQueryString (28-31), postJsonWithBytesResponse (152-156)
- app/controllers/UserTokenController.scala (1): RpcTokenHolder (30-38)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteDatastoreClient.scala (1): hasSegmentIndexFile (111-117)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteDatastoreClient.scala (4)
- util/src/main/scala/com/scalableminds/util/cache/AlfuCache.scala (3): AlfuCache (15-42), AlfuCache (44-78), getOrLoad (17-27)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteWebknossosClient.scala (2): getDatasetIdForAnnotation (77-88), getDataStoreUriForDataSource (71-77)
- util/src/main/scala/com/scalableminds/util/tools/Fox.scala (3): s (229-233), s (233-243), toFox (14-14)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/rpc/RPCRequest.scala (2): withTokenFromContext (33-36), getWithJsonResponse (100-103)
app/models/annotation/AnnotationService.scala (2)
- app/models/dataset/WKRemoteDataStoreClient.scala (1): hasSegmentIndexFile (80-90)
- util/src/main/scala/com/scalableminds/util/objectid/ObjectId.scala (1): toString (12-12)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/volume/VolumeTracingService.scala (2)
- util/src/main/scala/com/scalableminds/util/tools/Fox.scala (3): Fox (30-223), Fox (225-289), runOptional (159-169)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/RemoteFallbackLayer.scala (3): RemoteFallbackLayer (18-18), RemoteFallbackLayer (20-25), fromDataLayerAndDatasetId (21-24)
frontend/javascripts/admin/api/mesh.ts (3)
- app/models/job/Job.scala (1): datasetId (55-55)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshFileService.scala (2): MeshChunkDataRequestList (41-44), MeshChunkDataRequestList (50-52)
- frontend/javascripts/admin/api/token.ts (1): doWithToken (39-74)
🪛 LanguageTool
docs/data/concepts.md
[style] ~104-~104: For conciseness, consider replacing this expression with an adverb.
Context: ...ynonymously for mags throughout the UI. At the moment, WebKnossos guarantees correct renderin...
(AT_THE_MOMENT)
🪛 markdownlint-cli2 (0.17.2)
docs/data/concepts.md
125-125: Unordered list style
Expected: dash; Actual: plus
(MD004, ul-style)
126-126: Unordered list style
Expected: dash; Actual: asterisk
(MD004, ul-style)
127-127: Unordered list style
Expected: dash; Actual: asterisk
(MD004, ul-style)
🔇 Additional comments (140)
unreleased_changes/8708.md (1)
1-6: LGTM! Well-structured changelog entry.
The changelog appropriately documents the key changes from this PR: removal of directory creation for remote datasets and the addition of the virtual datasets database evolution. The reference to the specific evolution file helps users understand the database changes required.
tools/postgres/schema.sql (2)
24-24: LGTM! Correct schema version increment.
The schema version is properly incremented to 136, matching the evolution script referenced in the changelog.
110-110: LGTM! Appropriate column addition for virtual datasets.
The isVirtual boolean column with DEFAULT FALSE is correctly added to support the virtual datasets feature. The default value ensures existing datasets are properly handled as non-virtual.
frontend/javascripts/viewer/model/sagas/load_histogram_data_saga.ts (1)
40-40: LGTM! Correctly updated to use dataset ID.
The change from passing the entire dataset object to just dataset.id aligns with the API refactoring to use dataset IDs consistently. This matches the updated getHistogramForLayer function signature in the REST API.
: LGTM! Correctly updated to use dataset ID.The change from
dataset
todataset.id
in thegetMeshfileChunksForSegment
call aligns with the API refactoring to use dataset IDs consistently throughout the system.
367-367
: LGTM! Correctly updated to use dataset ID.The change from
dataset
todataset.id
in thegetMeshfileChunkData
call is consistent with the dataset ID refactoring and matches the previous change in this file.frontend/javascripts/dashboard/dataset/dataset_settings_delete_tab.tsx (1)
45-45
: LGTM! Simplified API call with full dataset object.The change to pass the entire
dataset
object directly todeleteDatasetOnDisk
simplifies the code and aligns with the API's expected signature. This eliminates the need to construct a separatedataSourceId
object and is consistent with the broader refactoring to use dataset objects/IDs throughout the system.webknossos-datastore/app/com/scalableminds/webknossos/datastore/DataStoreModule.scala (1)
67-67
: LGTM: Service binding follows established patternsThe addition of
ZarrStreamingService
as an eager singleton is consistent with other service bindings in this module and appropriate for integrating the new Zarr streaming functionality.webknossos-tracingstore/conf/tracingstore.versioned.routes (1)
3-3
: LGTM: API versioning follows established patternThe addition of
/v10/
route mapping is consistent with the existing versioning scheme and properly routes to the same handler as other versions.frontend/javascripts/viewer/view/right-border-tabs/segments_tab/segments_view_helper.tsx (1)
79-79
: LGTM: Dataset identification updated to use dataset IDThe change from
dataset.directoryName
todataset.id
aligns with the broader refactoring to use unified dataset IDs instead of organization/directory name pairs. This simplifies dataset identification and is consistent with the virtual datasets feature.webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/DatasetCache.scala (1)
29-29
: LGTM: Type-safe parameter eliminates unnecessary conversionChanging the parameter type from
String
toObjectId
improves type safety and eliminates the need for string-to-ObjectId conversion, making the API consistent with the cache key type and the calling controller's expectations.frontend/javascripts/test/model/binary/layers/wkstore_adapter.spec.ts (3)
39-39
: LGTM: Mock dataset updated with dataset IDAdding the
id
field to the mock dataset aligns with the dataset ID-based refactoring and ensures tests use the new dataset structure.
162-167
: LGTM: URL expectations updated for dataset ID routingThe expected URLs correctly use
datasetId
instead of the previousorganization/datasetPath
format, aligning with the new dataset ID-based API routing structure.
198-199: LGTM: Consistent URL format across all test cases
All test cases correctly use the new dataset ID format in expected URLs, ensuring comprehensive test coverage for the updated routing structure.
Also applies to: 218-219, 237-238
frontend/javascripts/viewer/view/right-border-tabs/connectome_tab/connectome_view.tsx (2)
15-15: LGTM: Import statement correctly updated.
The removal of `APIDataSourceId` from imports aligns with the transition to dataset-centric identification.
353-358: Synapse API functions now accept APIDataset.
All of the listed functions in frontend/javascripts/admin/rest_api.ts, including `getSynapsesOfAgglomerates`, `getSynapseSourcesOrDestinations` (and its `getSynapseSources`/`getSynapseDestinations` wrappers), `getSynapsePositions`, and `getSynapseTypes`, have been updated to take an `APIDataset` parameter instead of `APIDataSourceId`. No further changes are required here.
conf/webknossos.versioned.routes (2)
7-7: LGTM: Clear version documentation.
The version log comment clearly documents the v10 change from organization/name-based dataset access to ID-based access.
18-18: LGTM: Correct route versioning.
The v10 route mapping follows the established pattern and is correctly positioned before v9 for proper priority handling.
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/volume/VolumeTracingService.scala (1)
977-979: LGTM: Correct transition to dataset ID-based fallback layer creation.
The changes properly implement the transition from composite DataSource IDs to unified dataset IDs:
- The `getDatasetIdForAnnotation` call correctly fetches the dataset ID as a `Fox[String]`
- The `fromDataLayerAndDatasetId` factory method is the appropriate replacement for the previous `fromDataLayerAndDataSource`
- The Fox monadic flow properly handles the asynchronous dataset ID retrieval
The code maintains proper error handling and follows the established patterns in the codebase.
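For illustration, that flow can be written as a short Fox for-comprehension. This is a minimal sketch, assuming the client and factory names from the comment; the surrounding wiring is not taken from the diff:

// Resolve the dataset ID for the annotation, then build the fallback layer (sketch).
for {
  datasetId <- remoteWebknossosClient.getDatasetIdForAnnotation(annotationId) // Fox[String]
  remoteFallbackLayer <- RemoteFallbackLayer.fromDataLayerAndDatasetId(layer, datasetId).toFox
} yield remoteFallbackLayer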
frontend/javascripts/viewer/model/bucket_data_handling/wkstore_adapter.ts (2)
100-100: LGTM: Simplified dataset identification.
The change to use `state.dataset.id` instead of separate organization and directory name components correctly aligns with the unified dataset ID approach.
105-105: LGTM: Updated URL construction for dataset ID-based API.
The URL construction now correctly uses the dataset ID pattern `/data/datasets/${datasetId}/layers/...`, which aligns with the backend API changes.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/DataSourceService.scala (1)
272-272: LGTM: Method rename improves semantic clarity.
The rename from `updateDataSource` to `updateDataSourceOnDisk` better reflects the method's purpose and distinguishes it from potential remote update operations.
frontend/javascripts/types/schemas/datasource.types.ts (1)
17-19: LGTM: Type simplification reduces redundancy.
The intersection type approach eliminates duplicate property declarations since `BaseRemoteLayer` already provides `boundingBox` and `mags` (formerly `wkwResolutions`). This change aligns with the broader schema refactoring for WKW support.
conf/evolutions/reversions/136-virtual-datasets.sql (1)
5-7: Critical issue: View recreation will fail after column drop.
The reversion script drops the `isVirtual` column (line 6) but then tries to recreate the view with a definition that may still reference logic dependent on that column. This will cause the migration to fail. The view should be recreated with the exact pre-136 definition. Verify what the original view definition was before the `isVirtual` column was added and use that definition here.
#!/bin/bash
# Check the original view definition before virtual datasets were introduced
git log --follow -p -- conf/evolutions/reversions/136-virtual-datasets.sql | grep -A 10 -B 10 "CREATE VIEW webknossos.datasets_"
⛔ Skipped due to learnings
Learnt from: frcroth PR: scalableminds/webknossos#8609 File: conf/evolutions/133-datasource-properties-in-db.sql:8-16 Timestamp: 2025-05-12T14:15:05.259Z Learning: The database schema in WEBKNOSSOS has separate tables for dataset layers (`dataset_layers`) and magnifications (`dataset_mags`). The `dataFormat` field is stored in the layers table while magnification-specific fields like `cubeLength` (specific to WKW format) are stored in the mags table.
conf/evolutions/136-virtual-datasets.sql (2)
6-6: LGTM: Column addition follows best practices.
The `isVirtual` column is properly defined with a NOT NULL constraint and a sensible default value (FALSE).
7-7: Review datasets_ view filter.
I didn't find any existing views filtering on isVirtual; only the evolution and schema files reference the new column. If the intent is to hide virtual datasets in this view, you'll need to update both the forward and reversion scripts to include an `AND NOT isVirtual` clause. Otherwise, the current definition (filtering only on `isDeleted`) is intentional.
• conf/evolutions/136-virtual-datasets.sql:7
• conf/evolutions/reversions/136-virtual-datasets.sql:7
Suggested change:
- CREATE VIEW webknossos.datasets_ AS SELECT * FROM webknossos.datasets WHERE NOT isDeleted;
+ CREATE VIEW webknossos.datasets_ AS SELECT * FROM webknossos.datasets WHERE NOT isDeleted AND NOT isVirtual;
Please confirm the desired behavior.
- CREATE VIEW webknossos.datasets_ AS SELECT * FROM webknossos.datasets WHERE NOT isDeleted; + CREATE VIEW webknossos.datasets_ AS SELECT * FROM webknossos.datasets WHERE NOT isDeleted AND NOT isVirtual;Please confirm the desired behavior.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala (2)
444-444
: LGTM: Explicit None parameter maintains existing behavior.The explicit
None
arguments for thedatasetId
parameter correctly maintain the existing behavior while adapting to the updateddeleteOnDisk
method signature that supports the new dataset ID-based operations.Also applies to: 452-452
499-499
: LGTM: Method call updated to match service rename.The call to
updateDataSourceOnDisk
correctly reflects the method rename inDataSourceService.scala
, maintaining consistency across the codebase.frontend/javascripts/admin/dataset/composition_wizard/04_configure_new_dataset.tsx (1)
171-178
: LGTM! Function call simplified correctly.The removal of the datastore URL parameter aligns with the backend refactoring that moved dataset composition logic from the datastore service to a dedicated
ComposeService
in the main application. The API now directly handles composition through the/api/datasets/compose
endpoint without requiring the datastore URL.webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/AccessTokenService.scala (1)
46-47: LGTM! ObjectId overloads added consistently.
The new overloaded methods for `readDataset`, `deleteDataset`, and `writeDataset` that accept `ObjectId` parameters are well-implemented. They properly convert the `ObjectId` to string before creating the `UserAccessRequest`, maintaining consistency with the existing pattern while supporting the broader refactoring to use dataset IDs throughout the system.
Also applies to: 49-50, 55-56
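As a sketch, each ObjectId overload can simply delegate to the existing String-based factory. The body below is assumed for illustration, not copied from the file:

// Hypothetical delegation pattern for the new overloads:
def readDataset(datasetId: ObjectId): UserAccessRequest =
  readDataset(datasetId.toString)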
frontend/javascripts/test/backend-snapshot-tests/datasets.e2e.ts (2)
99-107: LGTM! Dynamic dataset ID retrieval implemented correctly.
The `getTestDatasetId` helper function is well-implemented with proper error handling. It dynamically retrieves the dataset ID by querying active datasets and matching by name, which makes tests more robust and aligns with the refactoring to use dataset IDs consistently throughout the system.
110-111: LGTM! Test cases updated to use dynamic dataset IDs.
The test cases now properly use the dynamic dataset ID resolution instead of hardcoded values. The template literals correctly incorporate the resolved dataset ID into the API URLs, ensuring tests work with the current dataset identifier scheme and remain robust against dataset changes.
Also applies to: 117-118, 126-128, 137-137, 148-150
app/controllers/UserTokenController.scala (1)
185-191: LGTM! Dataset deletion access control implemented consistently.
The new `tryDelete` method for datasets follows the established pattern from the data source deletion logic. It properly validates:
- Dataset deletion is enabled in configuration
- The dataset exists (using `GlobalAccessContext`)
- The user is an admin in the same organization as the dataset
The implementation correctly uses Fox error handling and integrates seamlessly with the existing access mode pattern matching.
Also applies to: 194-197
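A condensed sketch of such a validation chain, with names and config keys assumed for illustration only:

// Hypothetical outline of the dataset tryDelete checks:
private def tryDeleteDataset(datasetId: ObjectId, user: User): Fox[UserAccessAnswer] =
  for {
    _ <- Fox.fromBool(conf.Features.allowDeleteDatasets) ?~> "dataset.delete.disabled"
    dataset <- datasetDAO.findOne(datasetId)(GlobalAccessContext) ?~> "dataset.notFound"
    isAllowed = user.isAdmin && user._organization == dataset._organization
  } yield UserAccessAnswer(isAllowed)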
app/models/dataset/explore/WKExploreRemoteLayerService.scala (1)
108-124: LGTM! Method refactored to use virtual dataset creation.
The method was appropriately renamed to `addRemoteDatasourceToDatabase` and refactored to directly call `datasetService.createVirtualDataset` instead of using remote client calls. This change:
- Simplifies the implementation by removing the need for remote datastore client interactions
- Aligns with the new virtual dataset functionality introduced in this PR
- Uses the correct parameters for virtual dataset creation (dataSource directory name, organization ID, dataStore, etc.)
The implementation correctly leverages the new virtual dataset architecture while maintaining the same functional outcome.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/datasource/DataLayer.scala (2)
493-508: LGTM! WKW layer support is properly implemented.
The `mapped` method correctly handles `WKWDataLayer` and `WKWSegmentationLayer` by following the same pattern as other layer types, applying the transformation functions to all necessary properties.
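Schematically, the added cases follow the same copy-with-transformations shape as the other layer types. The field and function names below are assumed, so treat this as an illustrative fragment of the existing match rather than the actual implementation:

// Inside the existing `mapped` match (sketch):
case w: WKWDataLayer =>
  w.copy(
    boundingBox = boundingBoxMapping(w.boundingBox),
    coordinateTransformations = coordinateTransformationsMapping(w.coordinateTransformations)
  )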
522-523: LGTM! Consistent implementation of getMags for WKW layers.
The method correctly returns the `mags` property for both WKW layer types, maintaining consistency with other layer implementations.
app/controllers/DatasetController.scala (3)
92-92: LGTM! Proper dependency injection of ComposeService.
The `composeService` parameter is correctly added to the constructor following the established dependency injection pattern.
494-500: LGTM! Well-implemented compose endpoint.
The `compose` method follows established controller patterns with proper security, JSON validation, and error handling. The response format is consistent with other endpoints.
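A hedged sketch of what such an endpoint looks like, reusing the `Fox[(DataSource, ObjectId)]` result type of `composeDataset` quoted further down in this thread; the response field name is an assumption:

// Illustrative compose endpoint:
def compose(): Action[ComposeRequest] =
  sil.SecuredAction.async(validateJson[ComposeRequest]) { implicit request =>
    for {
      (_, datasetId) <- composeService.composeDataset(request.body, request.identity)
    } yield Ok(Json.obj("newDatasetId" -> datasetId.toString))
  }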
149-152: LGTM! Method name change reflects new virtual dataset approach.
The change from `addRemoteDatasource` to `addRemoteDatasourceToDatabase` better describes the new functionality of creating virtual datasets in the database rather than registering remote datasources.
docs/data/concepts.md (6)
26-26: LGTM! Improved terminology for magnification steps.
The updated text clearly establishes "mags" as the primary term while maintaining reference to other synonymous terms, aligning with the codebase refactoring.
104-105: LGTM! Clear clarification of terminology usage.
The documentation now explicitly states that "magnifications" and "mags" are synonymous in the UI, and updates the rendering guarantee to use consistent "mag factors" terminology.
72-77: LGTM! Updated JSON metadata structure.
The `mags` array with `mag` and `path` properties is clearer than the previous `wkwResolutions` structure and aligns with the new schema.
89-93: LGTM! Consistent segmentation layer example.
The segmentation layer example correctly uses the new `mags` structure, maintaining consistency with the color layer example.
103-103: LGTM! Clear explanation of the new structure.
The documentation clearly explains that `mag` is always a 3-element array and describes the `path` property, helping users understand the format.
125-127: LGTM! Updated specification for new mags structure.
The specification correctly describes the new `dataLayers.mags` format with `mag` as a 3-tuple and `path` for data location, replacing the old `wkwResolutions` structure.
frontend/javascripts/dashboard/dataset/dataset_settings_view.tsx (2)
369-369
: LGTM! Correctly updated API call to use dataset ID.The parameter order and types correctly match the new
updateDatasetDatasource
signature that usesdataset.id
instead ofdataset.directoryName
, aligning with the dataset-centric approach.
586-586
: LGTM! Improved prop passing to child component.Passing the full
dataset
object instead of justdataSourceId
provides more context toDatasetSettingsViewConfigTab
and aligns with the new dataset-centric API patterns.frontend/javascripts/dashboard/dataset/dataset_settings_viewconfig_tab.tsx (6)
26-26
: LGTM! Added necessary import for dataset type.The
APIDataset
import is correctly added to support the new prop type.
37-40
: LGTM! Updated props to use full dataset object.Changing from
dataSourceId
todataset: APIDataset | null | undefined
provides more context and aligns with the parent component changes.
47-53
: LGTM! Improved function signature and error handling.The
validateDefaultMappings
function correctly accepts the full dataset object and includes proper error handling for undefined dataset cases.
58-79
: LGTM! Well-implemented function updates with proper error handling.The mapping requests correctly use the dataset object, include proper null checks, and maintain the existing caching mechanism. The error handling provides clear, descriptive messages.
81-98
: LGTM! Validation logic correctly updated for new parameter structure.The validation logic properly uses the dataset-based mapping requests while maintaining the same error checking functionality.
312-312
: LGTM! Function call correctly updated with dataset parameter.The
validateDefaultMappings
call correctly passes thedataset
parameter instead of the old dataSourceId structure.app/controllers/AnnotationIOController.scala (2)
340-342: LGTM! Method signature improvement aligns with dataset ID unification.
The updated signature using a `datasetId` string instead of separate organization and dataset name parameters simplifies the API and aligns with the broader refactoring to use dataset IDs consistently.
365-365: LGTM! RPC call correctly updated to use new parameters.
The call to `hasSegmentIndexFile` correctly uses the updated parameters (`datasetId` and `layerName`) matching the new method signature.
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteWebknossosClient.scala (3)
53-53
: LGTM! Cache updated correctly for dataset ID storage.The cache rename and type change from
DataSourceId
toString
properly reflects the shift to using dataset IDs instead of composite DataSource identifiers.
71-72
: LGTM! Method simplified to use dataset ID parameter.The updated signature using a single
datasetId
parameter simplifies the API and the RPC path construction is clean and direct.
77-86
: LGTM! Method comprehensively updated for dataset ID retrieval.The method rename, return type change to
String
, and updated RPC endpoint path are all consistent with the dataset ID unification effort. The cache integration works correctly with the updated cache type.app/models/annotation/AnnotationService.scala (4)
124-124
: LGTM! Method signature updated to include dataset ID parameter.The addition of
datasetId: ObjectId
parameter is consistent with the dataset ID unification effort and provides the necessary data for updated RPC calls within the method.
142-142
: LGTM! RPC call correctly updated to use dataset ID.The call to
hasSegmentIndexFile
properly usesdatasetId.toString()
andlayer.name
, aligning with the updated method signature and the dataset ID unification approach.
241-241: LGTM! Call site correctly updated with dataset ID parameter.
The addition of the `dataset._id` parameter to the `createVolumeTracing` call is correct and provides the required dataset ID information.
435-435: LGTM! Call site correctly updated with dataset ID parameter.
The call to `createVolumeTracing` correctly passes the `datasetId` parameter, maintaining consistency with the updated method signature.
frontend/javascripts/types/schemas/datasource.schema.ts (3)
52-52
: LGTM! Property renamed for consistency with schema standards.The rename from
wkwResolutions
tomags
aligns with the naming convention used in other data layer types and improves overall schema consistency.
57-81
: LGTM! Item properties enhanced with new functionality.The changes improve the schema by:
- Renaming
resolution
tomag
for consistency- Adding
path
for data location specification- Adding
credentials
with proper user/password structure for authenticated access- Adding
axisOrder
for flexible axis configurationThese additions enhance the schema's capability to handle remote and authenticated data sources.
83-87
: LGTM! Required fields updated correctly for schema consistency.The required fields updates properly reflect the property changes:
- Array items now require only
mag
instead of the removedresolution
andcubeLength
- Main object requirements updated from
wkwResolutions
tomags
This maintains schema validity while providing appropriate flexibility for optional properties.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/helpers/DatasetDeleter.scala (3)
25-31
: LGTM - Clean addition for disk existence check.The new
existsOnDisk
method provides a clean way to check for dataset presence on disk, with proper support for datasets in conversion. The implementation correctly handles both regular and conversion directory structures.
35-35
: Good design for virtual dataset support.The addition of the optional
datasetId
parameter and conditional execution ofmoveSymlinks
is well-designed for virtual datasets, which may not have physical symlinks to manage.Also applies to: 76-76
85-89
: fetchPaths signature verifiedThe
DSRemoteWebknossosClient.fetchPaths
method is defined asdef fetchPaths(datasetId: ObjectId): Fox[List[LayerMagLinkInfo]]so passing the
datasetId: ObjectId
frommoveSymlinks
is correct. No further changes needed.frontend/javascripts/admin/api/mesh.ts (2)
29-34
: LGTM - Clean migration to dataset ID.The function signature change from composite
APIDataSourceId
to singledatasetId: string
simplifies the API and aligns with the unified dataset identification approach. The URL construction correctly uses the dataset ID.Also applies to: 59-59
79-84
: Consistent API simplification.The
getMeshfileChunkData
function follows the same pattern asgetMeshfileChunksForSegment
, maintaining consistency in the API redesign.Also applies to: 87-87
app/controllers/WKRemoteTracingStoreController.scala (2)
124-133
: Method rename aligns with unified dataset identification.The rename from
dataSourceIdForAnnotation
todatasetIdForAnnotation
and the simplified implementation returningdataset._id.toString
correctly reflects the shift to dataset-centric operations.
149-158
: Simplified parameter handling for dataset operations.The method now directly accepts
datasetId: ObjectId
instead of deriving it from organization and directory name, which simplifies the logic and aligns with the unified approach.webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteDatastoreClient.scala (2)
39-39
: Cache key simplification aligns with unified approach.Changing the cache key from
(String, String)
toString
for dataset ID-based caching is a good simplification that aligns with the unified dataset identification approach.
148-161
: Consistent dataset ID usage in client methods.The methods now consistently use
datasetId
directly instead of deriving it from annotation or composite identifiers. This simplifies the code and reduces complexity.app/models/dataset/Dataset.scala (4)
65-65
: Well-integrated isVirtual field addition.The new
isVirtual
field is properly integrated throughout the Dataset model - in the case class definition, parsing, and insertion queries. This supports the virtual datasets feature comprehensively.Also applies to: 149-149, 625-625, 634-634
522-529: Useful utility method for virtual dataset management.
The `getVirtualDatasetIds` method provides a clean way to retrieve virtual dataset IDs, which will be useful for virtual dataset-specific operations.
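As a sketch, such a helper is a thin query over the datasets table. The SQL interpolation style is assumed here:

// Hypothetical shape of the virtual-dataset-ID lookup:
def getVirtualDatasetIds: Fox[List[ObjectId]] =
  for {
    rows <- run(q"SELECT _id FROM webknossos.datasets WHERE isVirtual".as[ObjectId])
  } yield rows.toList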
693-693: Appropriate exclusion of virtual datasets from deactivation.
Adding `NOT isVirtual` to the `deactivateUnreported` query correctly excludes virtual datasets from deactivation, since they don't have physical presence on datastores that could become "unreported".
777-780
: Enhanced magnification record with path field.The addition of the
path
field to theupdateMags
insertion query provides more complete magnification records, supporting the enhanced dataset management capabilities.webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/DSRemoteWebknossosClient.scala (5)
53-57: LGTM! Well-structured case class for dataset registration.
The `DataSourceRegistrationInfo` case class properly encapsulates the `DataSource` and optional `folderId`, supporting the new virtual dataset registration workflow. The JSON format is correctly defined.
144-156: LGTM! Consistent dataset registration implementation.
The `registerDataSource` method correctly:
- Accepts a `DataSource`, `DataSourceId`, and optional `folderId`
- Creates a `DataSourceRegistrationInfo` wrapper
- Uses the organization and directory name from the `DataSourceId` for the API path
- Returns the response body as the dataset ID
The implementation follows the established RPC pattern and handles authentication properly.
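A rough sketch of the RPC described by these bullets; the helper names (`rpc`, `postJsonWithJsonResponse`) and URI variables are assumptions:

// Assumed outline of the registration call; the response body is the new dataset ID.
def registerDataSource(dataSource: DataSource, dataSourceId: DataSourceId, folderId: Option[String]): Fox[String] =
  rpc(s"$webknossosUri/api/datastores/$dataStoreName/datasources/${dataSourceId.organizationId}/${dataSourceId.directoryName}")
    .addQueryString("key" -> dataStoreKey)
    .postJsonWithJsonResponse[DataSourceRegistrationInfo, String](DataSourceRegistrationInfo(dataSource, folderId))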
157-161
: LGTM! Clean dataset update implementation.The
updateDataSource
method correctly uses theObjectId
dataset identifier in the API path and follows the established RPC patterns for authentication and JSON serialization.
168-172
: LGTM! Appropriate method for virtual dataset deletion.The
deleteVirtualDataset
method correctly usesObjectId
for virtual dataset deletion, distinguishing it from the existingdeleteDataSource
method that handles traditional datasets with composite identifiers.
130-133
: AllfetchPaths
calls correctly useObjectId
Verified that the only invocation ofremoteWebknossosClient.fetchPaths
inDatasetDeleter.scala
passes theObjectId
(datasetId
) and there are no remainingDataSourceId
usages. No further changes needed.app/models/dataset/WKRemoteDataStoreClient.scala (6)
24-25
: LGTM! Simplified cache key structure.The cache key simplification from
(organizationId, datasetName, layerName)
to(datasetId, layerName)
aligns with the unified dataset identification approach and reduces complexity.
37-37
: LGTM! Consistent URL construction with dataset ID.The URL construction correctly uses
dataset._id
for the dataset identifier, maintaining consistency with the new dataset ID-based routing.
60-60
: LGTM! Updated URL for layer data access.The URL construction properly uses
dataset._id
instead of organization and dataset directory components.
67-67
: LGTM! Consistent dataset ID usage in data position endpoint.The URL correctly uses
dataset._id
for the find data position endpoint.
80-89
: LGTM! Well-refactored segment index check method.The method signature correctly accepts
datasetId: String
andlayerName: String
parameters. The cache key usage(datasetId, layerName)
and URL construction withk._1
(datasetId) andk._2
(layerName) are consistent and correct.
102-102
: LGTM! Simplified cache invalidation with dataset ID.The method correctly uses a single
datasetId
parameter for cache invalidation, aligning with the unified identification approach.webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/RemoteFallbackLayer.scala (3)
18-18
: LGTM! Simplified case class with unified dataset identification.The
RemoteFallbackLayer
case class correctly uses a singledatasetId: String
parameter instead of separate organization and dataset directory fields, aligning with the system-wide migration to unified dataset identification.
21-24: LGTM! Method renamed and updated appropriately.
The method `fromDataLayerAndDatasetId` is well-named and correctly constructs the `RemoteFallbackLayer` with the provided `datasetId`. The logic remains sound with proper error handling via `Box`.
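Plausibly, the factory is a small Box-based constructor along these lines; the element-class conversion and field order are assumptions:

// Assumed sketch of fromDataLayerAndDatasetId:
def fromDataLayerAndDatasetId(dataLayer: DataLayer, datasetId: String): Box[RemoteFallbackLayer] =
  for {
    elementClassProto <- ElementClass.toProto(dataLayer.elementClass)
  } yield RemoteFallbackLayer(datasetId, dataLayer.name, elementClassProto, dataLayer.largestSegmentId)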
37-38
: LGTM! Updated to use dataset ID for annotation.The method correctly calls
getDatasetIdForAnnotation
and uses the returneddatasetId
to construct theRemoteFallbackLayer
, maintaining consistency with the dataset ID migration.webknossos-datastore/conf/datastore.versioned.routes (4)
3-4
: LGTM! Proper API versioning for dataset ID migration.The addition of
/v10/
routing todatastore.latest.Routes
correctly supports the new dataset ID-based API while maintaining/v9/
for backward compatibility.
6-18
: LGTM! Comprehensive legacy endpoint coverage.The legacy routes properly cover all essential data access operations:
- Image data reading (POST/GET variants)
- Thumbnails
- Data position finding
- Histograms
- Raw bucket access
- Ad-hoc mesh generation
- Mapping JSON retrieval
All routes correctly use
organizationId
anddatasetDirectoryName
parameters for backward compatibility.
20-33
: LGTM! Complete Zarr v2 compatibility routes.The Zarr v2 routes comprehensively cover:
- Directory contents and metadata
- Group information (
.zgroup
)- Data source properties
- Layer-specific operations
- Array metadata (
.zarray
)- Raw cube data access
All routes properly maintain the legacy parameter structure.
34-45
: LGTM! Experimental Zarr v3 support.The Zarr v3 experimental routes provide forward-looking compatibility with:
- Zarr v3 metadata (
zarr.json
)- Updated directory and group structures
- Proper versioning distinction from v2
The experimental nature is appropriately indicated in the path prefix.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/DSFullMeshService.scala (3)
55-62
: LGTM! Streamlined method signature with direct object parameters.The updated
loadFor
method correctly acceptsDataSource
andDataLayer
objects directly, eliminating redundant repository lookups. This design improves performance and aligns with the controller layer changes that resolve datasets via cache before calling this method.
64-66
: LGTM! Consistent parameter update for ad-hoc meshing.The
loadFullMeshFromAdHoc
method signature correctly mirrors theloadFor
method changes, accepting resolvedDataSource
andDataLayer
objects directly.
114-117
: LGTM! Consistent parameter update for mesh file loading.The
loadFullMeshFromMeshFile
method signature is properly updated to acceptDataSource
andDataLayer
objects directly, maintaining consistency with the other method updates in this service.app/controllers/WKRemoteDataStoreController.scala (4)
245-256
: LGTM!The implementation correctly validates the dataset ID, ensures the dataset exists and is virtual before performing a soft delete.
283-310
: Implementation looks good!The method properly validates access, authenticates the user, checks organization membership, and creates the virtual dataset with appropriate parameters.
311-325
: LGTM!The method correctly validates the dataset existence and updates the datasource with proper abstraction layer conversion.
257-261
: Good refactoring to use dataset ID!The change from organization/directory name to dataset ID simplifies the API and aligns with the unified dataset identification approach.
app/models/dataset/DatasetService.scala (3)
103-124: Well-structured virtual dataset creation!
The method properly validates dataset name uniqueness, folder access permissions, and organization membership before creating the virtual dataset. The sequential validation using the Fox monad ensures proper error propagation.
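Sequential Fox validation of this kind usually reads as a straight-line for-comprehension. A hedged outline, with the parameter list and helper names assumed:

// Illustrative ordering of the checks before inserting the virtual dataset:
def createVirtualDataset(datasetName: String,
                         organizationId: String,
                         dataStore: DataStore,
                         dataSource: DataSource,
                         folderId: Option[String],
                         user: User): Fox[ObjectId] =
  for {
    _ <- assertValidDatasetName(datasetName) ?~> "dataset.name.invalid"
    _ <- assertNoDuplicateDatasetName(datasetName, organizationId) ?~> "dataset.name.alreadyTaken"
    _ <- Fox.fromBool(user._organization == organizationId) ?~> "organization.notAllowed"
    datasetId <- insertVirtualDataset(datasetName, dataSource, dataStore, folderId)
  } yield datasetId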
290-296: Important fix for virtual dataset handling!
Good addition to exclude virtual datasets from deactivation. Since virtual datasets don't exist on disk, they shouldn't be marked as unreported when the datastore reports its datasets.
366-377: Proper WKW layer support implementation!
The code now correctly instantiates WKWDataLayer and WKWSegmentationLayer objects instead of throwing NotImplementedError, enabling full WKW format support.
Also applies to: 458-470
webknossos-datastore/app/com/scalableminds/webknossos/datastore/dataformats/layers/WKWDataLayers.scala (1)
60-98: Excellent backward-compatible JSON serialization!
The custom Format implementation properly handles both the new `mags` and legacy `wkwResolutions` fields, ensuring smooth migration without breaking existing data.
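A schematic of such a tolerant Reads: prefer the new key and fall back to the legacy one. `WKWResolution.toMagLocator` is assumed for illustration:

// Read "mags" if present, else map legacy "wkwResolutions" (sketch):
def readMags(json: JsValue): JsResult[List[MagLocator]] =
  (json \ "mags")
    .validate[List[MagLocator]]
    .orElse((json \ "wkwResolutions").validate[List[WKWResolution]].map(_.map(_.toMagLocator)))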
77-90: LGTM!
The composability check correctly ensures all datasets are on the same data store. The comment about future virtual dataset capabilities is helpful.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DSMeshController.scala (2)
23-24
: Good migration to dataset cache!Replacing
dataSourceRepository
withdatasetCache
improves performance and aligns with the unified dataset ID approach.
33-93
: Excellent consistent refactoring!All methods have been properly updated to use the unified dataset ID approach. The consistent use of
datasetCache.getWithLayer
andUserAccessRequest.readDataset
simplifies the codebase.conf/webknossos.latest.routes (5)
96-96
: LGTM! Well-structured route for dataset composition.The new POST route follows RESTful conventions and aligns with the PR's virtual dataset feature implementation.
112-112
: Good refactor to unify dataset identification.The change from composite
(organizationId, directoryName)
to singledatasetId: ObjectId
simplifies the API and improves type safety.
114-115
: Smart approach for backward compatibility.The dual identification approach (organizationId/directoryName for registration, datasetId for updates) provides a smooth migration path.
121-121
: Clear separation of virtual dataset deletion logic.Having a dedicated endpoint for virtual dataset deletion is appropriate given their different lifecycle management compared to disk-based datasets.
136-138
: Consistent refactoring of tracingstore routes.The route renaming and parameter changes to use
datasetId: ObjectId
maintain consistency with the overall dataset identification unification.webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ZarrStreamingService.scala (1)
22-31
: Well-structured service with focused dependencies.The service properly uses dependency injection and extends appropriate traits for Zarr output and Fox monadic operations.
webknossos-datastore/conf/datastore.latest.routes (3)
7-12
: Consistent refactoring of binary data routes.All binary data routes have been successfully unified to use
datasetId: ObjectId
, simplifying the API and improving type safety.
14-100
: Comprehensive and consistent route parameter unification.All data access routes have been systematically updated to use
datasetId: ObjectId
. This includes Knossos compatibility, Zarr streaming, segmentation mappings, agglomerates, meshes, connectomes, and segment indices.
111-123
: Datasource management routes properly updated.The routes have been correctly updated to use
datasetId: ObjectId
. The reload trigger appropriately maintainsorganizationId
for compatibility while adding the newdatasetId
parameter.webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/BinaryDataController.scala (3)
35-37
: Good architectural improvement using datasetCache.Replacing
dataSourceRepository
withdatasetCache
improves performance through caching while maintaining the same functionality.
55-74
: Properly refactored to use dataset ID-based access.The method correctly uses
datasetCache.getById
and updates access validation toreadDataset(datasetId)
while maintaining all error handling and logging functionality.
79-276
: Excellent consistency in refactoring all controller methods.All methods have been systematically updated with:
datasetId: ObjectId
parametersdatasetCache.getById
orgetWithLayer
for data retrievalreadDataset(datasetId)
for access validation- Preserved error handling and response formatting
The refactoring maintains backward compatibility while modernizing the API.
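To make the pattern concrete, a typical refactored action looks roughly like this; helper signatures are assumed, so read it as a sketch rather than a verbatim excerpt:

// Assumed shape of a dataset-ID-based controller action:
def findData(datasetId: ObjectId, dataLayerName: String): Action[AnyContent] =
  Action.async { implicit request =>
    accessTokenService.validateAccessFromTokenContext(UserAccessRequest.readDataset(datasetId)) {
      for {
        (dataSource, dataLayer) <- datasetCache.getWithLayer(datasetId, dataLayerName) ~> NOT_FOUND
        positionOpt <- findDataService.findPositionWithData(dataSource.id, dataLayer)
      } yield Ok(Json.obj("position" -> positionOpt))
    }
  }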
webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/LegacyController.scala (2)
63-89
: Well-designed legacy controller for backward compatibility.This controller provides a clean separation between the new dataset ID-based API and legacy endpoints, allowing for a smooth migration path for API clients.
357-567
: Complete legacy support for Zarr and mesh endpoints.The Zarr streaming and mesh endpoints maintain full backward compatibility with the original API signatures, ensuring existing integrations continue to work.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/ZarrStreamingController.scala (5)
23-29
: LGTM! Clean dependency injection and service delegation.The constructor properly injects the new
datasetCache
andzarrStreamingService
dependencies, following the established pattern of centralizing dataset access and delegating Zarr-specific logic to a dedicated service.
43-54
: Good refactoring to unified dataset identification.The method correctly uses the new dataset ID pattern and properly delegates Zarr header generation to the service layer.
118-128
: Consistent refactoring with proper service delegation.The method maintains the same functionality while adopting the unified dataset ID approach and delegating Zarr conversion to the service layer.
81-86
: Private link methods properly updated for dataset ID usage.The methods correctly use
datasetCache.getWithLayer
with the new dataset ID pattern while maintaining the dual support for annotation layers and regular data layers.Also applies to: 103-110
317-335
: Well-structured directory listing functionality.The new directory listing methods provide a clean interface for browsing Zarr dataset structure. They consistently use the dataset ID pattern and properly delegate to the service layer for content generation.
Also applies to: 365-383, 416-432
frontend/javascripts/admin/rest_api.ts (6)
843-851
: Consistent dataset ID usage in segment index check.The function correctly uses the dataset ID in the URL construction, maintaining the same functionality with simplified parameters.
976-995
: API methods properly updated for dataset ID usage.The
readDatasetDatasource
andupdateDatasetDatasource
functions correctly use the dataset ID in URL construction and maintain backward compatibility by preserving the datasource structure.
1104-1122
: Good handling of dataset composition with backward compatibility.The function correctly maps the new dataset ID format while maintaining compatibility. The comment clearly explains the transformation for backend parsing.
1325-1357
: Cache management functions properly simplified.The cache clearing and deletion functions correctly use dataset IDs, removing the need for complex URL construction with organization and directory names.
Also applies to: 1365-1370
1389-1406
: Layer operation APIs consistently updated.All layer-related functions (
findDataPositionForLayer
,getHistogramForLayer
,getMappingsForDatasetLayer
,fetchMapping
) properly use dataset IDs in their URL construction, maintaining a consistent API pattern.Also applies to: 1424-1435, 1437-1461
1966-1982
: Specialized data access APIs properly migrated.All mesh, agglomerate, and connectome-related functions consistently use dataset IDs in their API calls, maintaining the same functionality with simplified parameters.
Also applies to: 1984-2017, 2082-2100, 2103-2213
webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala (5)
68-74
: Clean refactoring of data source reading.The method properly uses the new dataset ID pattern with centralized cache access.
350-364
: Good handling of hybrid storage during transition.The update method correctly handles both disk-based and remote datasets during the migration period. The comment clearly explains why this dual path is necessary.
409-427
: Cache clearing properly adapted for new architecture.The method correctly uses the provided DataSource object instead of reloading from disk, which aligns with the centralized cache approach.
441-461
: Deletion logic properly handles hybrid storage.The method correctly determines whether to delete from disk or remotely, maintaining compatibility during the transition to virtual datasets.
244-257
: Listing methods consistently updated.All data listing methods properly use the dataset ID pattern with centralized cache access and maintain consistent error handling.
Also applies to: 259-269, 463-471
private def convertLayerToZarrLayer(layer: DataLayer, zarrVersion: Int): ZarrLayer = {
  val dataFormat = if (zarrVersion == 2) DataFormat.zarr else DataFormat.zarr3
  layer match {
    case s: SegmentationLayer =>
      val rank = s.additionalAxes.map(_.length).getOrElse(0) + 4
      ZarrSegmentationLayer(
        s.name,
        s.boundingBox,
        s.elementClass,
        mags = s.sortedMags.map(
          m =>
            MagLocator(m,
                       Some(s"./${s.name}/${m.toMagLiteral(allowScalar = true)}"),
                       None,
                       Some(AxisOrder.cAdditionalxyz(rank)),
                       None,
                       None)),
        mappings = s.mappings,
        largestSegmentId = s.largestSegmentId,
        numChannels = Some(if (s.elementClass == ElementClass.uint24) 3 else 1),
        defaultViewConfiguration = s.defaultViewConfiguration,
        adminViewConfiguration = s.adminViewConfiguration,
        coordinateTransformations = s.coordinateTransformations,
        additionalAxes = s.additionalAxes.map(reorderAdditionalAxes),
        dataFormat = dataFormat
      )
    case d: DataLayer =>
      val rank = d.additionalAxes.map(_.length).getOrElse(0) + 4
      ZarrDataLayer(
        d.name,
        d.category,
        d.boundingBox,
        d.elementClass,
        mags = d.sortedMags.map(
          m =>
            MagLocator(m,
                       Some(s"./${d.name}/${m.toMagLiteral(allowScalar = true)}"),
                       None,
                       Some(AxisOrder.cAdditionalxyz(rank)),
                       None,
                       None)),
        numChannels = Some(if (d.elementClass == ElementClass.uint24) 3 else 1),
        defaultViewConfiguration = d.defaultViewConfiguration,
        adminViewConfiguration = d.adminViewConfiguration,
        coordinateTransformations = d.coordinateTransformations,
        additionalAxes = d.additionalAxes.map(reorderAdditionalAxes),
        dataFormat = dataFormat
      )
  }
}
🛠️ Refactor suggestion
Refactor to eliminate code duplication.
The `convertLayerToZarrLayer` method contains significant duplication between the SegmentationLayer and DataLayer cases. Consider extracting common logic:
-private def convertLayerToZarrLayer(layer: DataLayer, zarrVersion: Int): ZarrLayer = {
+private def convertLayerToZarrLayer(layer: DataLayer, zarrVersion: Int): ZarrLayer = {
val dataFormat = if (zarrVersion == 2) DataFormat.zarr else DataFormat.zarr3
+ val rank = layer.additionalAxes.map(_.length).getOrElse(0) + 4
+ val numChannels = Some(if (layer.elementClass == ElementClass.uint24) 3 else 1)
+ val mags = layer.sortedMags.map(m =>
+ MagLocator(m,
+ Some(s"./${layer.name}/${m.toMagLiteral(allowScalar = true)}"),
+ None,
+ Some(AxisOrder.cAdditionalxyz(rank)),
+ None,
+ None))
+ val reorderedAxes = layer.additionalAxes.map(reorderAdditionalAxes)
+
layer match {
case s: SegmentationLayer =>
- val rank = s.additionalAxes.map(_.length).getOrElse(0) + 4
ZarrSegmentationLayer(
s.name,
s.boundingBox,
s.elementClass,
- mags = s.sortedMags.map(
- m =>
- MagLocator(m,
- Some(s"./${s.name}/${m.toMagLiteral(allowScalar = true)}"),
- None,
- Some(AxisOrder.cAdditionalxyz(rank)),
- None,
- None)),
+ mags = mags,
mappings = s.mappings,
largestSegmentId = s.largestSegmentId,
- numChannels = Some(if (s.elementClass == ElementClass.uint24) 3 else 1),
+ numChannels = numChannels,
defaultViewConfiguration = s.defaultViewConfiguration,
adminViewConfiguration = s.adminViewConfiguration,
coordinateTransformations = s.coordinateTransformations,
- additionalAxes = s.additionalAxes.map(reorderAdditionalAxes),
+ additionalAxes = reorderedAxes,
dataFormat = dataFormat
)
case d: DataLayer =>
- val rank = d.additionalAxes.map(_.length).getOrElse(0) + 4
ZarrDataLayer(
d.name,
d.category,
d.boundingBox,
d.elementClass,
- mags = d.sortedMags.map(
- m =>
- MagLocator(m,
- Some(s"./${d.name}/${m.toMagLiteral(allowScalar = true)}"),
- None,
- Some(AxisOrder.cAdditionalxyz(rank)),
- None,
- None)),
- numChannels = Some(if (d.elementClass == ElementClass.uint24) 3 else 1),
+ mags = mags,
+ numChannels = numChannels,
defaultViewConfiguration = d.defaultViewConfiguration,
adminViewConfiguration = d.adminViewConfiguration,
coordinateTransformations = d.coordinateTransformations,
- additionalAxes = d.additionalAxes.map(reorderAdditionalAxes),
+ additionalAxes = reorderedAxes,
dataFormat = dataFormat
)
}
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
private def convertLayerToZarrLayer(layer: DataLayer, zarrVersion: Int): ZarrLayer = {
  val dataFormat = if (zarrVersion == 2) DataFormat.zarr else DataFormat.zarr3
  val rank = layer.additionalAxes.map(_.length).getOrElse(0) + 4
  val numChannels = Some(if (layer.elementClass == ElementClass.uint24) 3 else 1)
  val mags = layer.sortedMags.map(m =>
    MagLocator(
      m,
      Some(s"./${layer.name}/${m.toMagLiteral(allowScalar = true)}"),
      None,
      Some(AxisOrder.cAdditionalxyz(rank)),
      None,
      None
    )
  )
  val reorderedAxes = layer.additionalAxes.map(reorderAdditionalAxes)
  layer match {
    case s: SegmentationLayer =>
      ZarrSegmentationLayer(
        s.name,
        s.boundingBox,
        s.elementClass,
        mags = mags,
        mappings = s.mappings,
        largestSegmentId = s.largestSegmentId,
        numChannels = numChannels,
        defaultViewConfiguration = s.defaultViewConfiguration,
        adminViewConfiguration = s.adminViewConfiguration,
        coordinateTransformations = s.coordinateTransformations,
        additionalAxes = reorderedAxes,
        dataFormat = dataFormat
      )
    case d: DataLayer =>
      ZarrDataLayer(
        d.name,
        d.category,
        d.boundingBox,
        d.elementClass,
        mags = mags,
        numChannels = numChannels,
        defaultViewConfiguration = d.defaultViewConfiguration,
        adminViewConfiguration = d.adminViewConfiguration,
        coordinateTransformations = d.coordinateTransformations,
        additionalAxes = reorderedAxes,
        dataFormat = dataFormat
      )
  }
}
🤖 Prompt for AI Agents
In webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ZarrStreamingService.scala, between lines 63 and 112, the convertLayerToZarrLayer method duplicates code for the SegmentationLayer and DataLayer cases. Refactor by extracting the common logic, such as rank calculation, MagLocator creation, numChannels determination, and common parameters, into shared helper functions or variables. Then use these shared components in both cases to reduce duplication and improve maintainability.
@MichaelBuessemeyer Maybe you could look at the frontend changes? I'm not sure whether to assign you as a reviewer here. I'd also be happy if you simply change anything you spot yourself, since I don't know much about the frontend anyway.
Actionable comments posted: 0
♻️ Duplicate comments (1)
app/models/dataset/ComposeService.scala (1)
82-82
: Good: empty layer validation implemented.
The validation for non-empty layers has been properly implemented, addressing the previous review comment.
🧹 Nitpick comments (1)
app/models/dataset/ComposeService.scala (1)
73-73
: Improve error message for unsupported layer types.
The error message could be more informative by specifying which layer types are supported.
- case _ => Fox.failure("Unsupported layer type for composition: " + layer.getClass.getSimpleName)
+ case _ => Fox.failure(s"Unsupported layer type for composition: ${layer.getClass.getSimpleName}. Only DataLayerWithMagLocators is supported.")
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
app/models/dataset/ComposeService.scala (1 hunks)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteDatastoreClient.scala (2 hunks)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteWebknossosClient.scala (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteWebknossosClient.scala
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteDatastoreClient.scala
🧰 Additional context used
🧠 Learnings (1)
app/models/dataset/ComposeService.scala (2)
Learnt from: frcroth
PR: #8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the `updateMags` method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
- Non-WKW datasets have `magsOpt` populated and use the first branch, which includes axisOrder, channelIndex, and credentialId.
- WKW datasets will have `wkwResolutionsOpt` populated and use the second branch, which includes cubeLength.
- The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshFileService.scala:161-166
Timestamp: 2025-04-28T14:18:04.368Z
Learning: In Scala for-comprehensions with the Fox error handling monad, Fox.fromBool()
expressions should use the <-
binding operator instead of the =
assignment operator to properly propagate error conditions. Using =
will cause validation failures to be silently ignored.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: build-smoketest-push
- GitHub Check: frontend-tests
- GitHub Check: backend-tests
🔇 Additional comments (2)
app/models/dataset/ComposeService.scala (2)
40-53
: LGTM! The method properly orchestrates the dataset composition flow with appropriate error handling.
91-102
: LGTM! The datasource creation logic is clean and correct.
Here are some TODO notes for me regarding the frontend changes
@@ -1106,21 +1102,20 @@ type DatasetCompositionArgs = {
};

export function createDatasetComposition(
  datastoreUrl: string,
  payload: DatasetCompositionArgs,
): Promise<NewDatasetReply> {
  // Formatting the dataSourceId to the old format so that the backend can parse it.
  // And removing the datasetId as the datastore cannot use it.
  const updatedLayers = payload.layers.map(({ dataSourceId, datasetId, ...rest }) => ({
TODO: Clean up; the frontend `DatasetCompositionArgs` no longer needs `dataSourceId`.
if ("wkwResolutions" in layer) { | ||
return layer.wkwResolutions.map((res) => res.resolution); | ||
} |
TODO: Check whether the format for WKW datasets was adapted accordingly.
TODO: Be careful about incoming merge conflicts.
Great stuff, thanks for pushing this through!
I think it is already very close to where we want to go :-) Please have a look at my comments below.
I think one thing that we need to be careful about is the changed format written for WKW datasets (`mags` instead of `wkwResolutions`). We need to make sure that if we write datasource-properties.json files like that, it doesn't break important workflows. Since we don't keep the `cubeLength`, we cannot simply write out the old format, so this probably means we need to adapt downstream clients.
If I understand correctly, virtual and composed datasets still have a datasetDirectoryName. What is this value then? Is it used for something? Could it become optional and be None for those datasets? (Can be a follow-up, though)
Also, in a follow-up we might want to make the datastore property optional as well, so that any datastore can serve the remote data (e.g. one deployed in the region of the user). Do you think there are important spots still that rely on there being a specific datastore registered for virtual datasets?
Also, I wonder if we could get rid of the stateful DataSourceRepository entirely and have the checkInbox function report to wk directly. Everything that is available from it should also be readable using the DatasetCache, right?
unreleased_changes/8708.md
Outdated
@@ -0,0 +1,5 @@
### Changed
- A new directory is no longer created for new remote datasets.
Please also mention the new API version, so that the need to adapt API clients (especially the libs) is pointed out.
Also, am I seeing it correctly that WKW datasets might now also be written with the `mags` key instead of `wkwResolutions`? That is also a change worth mentioning here, and we do indeed need to make sure that the libs can read WKW datasets written like that.
Action.async(validateJson[String]) { implicit request =>
  dataStoreService.validateAccess(name, key) { _ =>
    for {
      datasetIdValidated <- ObjectId.fromString(request.body) ?~> "dataset.delete.invalidId" ~> BAD_REQUEST
I think you can use `Action[ObjectId]` and `validateJson[ObjectId]` directly.
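For example, a sketch with the surrounding names from the snippet above (the DAO delete helper is assumed):

// validateJson[ObjectId] removes the manual ObjectId.fromString step:
def deleteVirtualDataset(name: String, key: String): Action[ObjectId] =
  Action.async(validateJson[ObjectId]) { implicit request =>
    dataStoreService.validateAccess(name, key) { _ =>
      for {
        dataset <- datasetDAO.findOne(request.body)(GlobalAccessContext) ?~> "dataset.notFound"
        _ <- datasetDAO.deleteDataset(dataset._id, onlyMarkAsDeleted = true)
      } yield Ok
    }
  }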
Also, it might make sense to turn the deletion protocol around, so that the frontend talks to the wk side, and wk checks whether `isVirtual == false` and, if so, asks the datastore to also delete on disk. But that could be a follow-up issue. What do you think?
Yes, that would make more sense.
@@ -270,6 +280,49 @@ class WKRemoteDataStoreController @Inject()(
}

// Register a datasource from the datastore as a dataset in the database.
// This is called when adding remote virtual datasets (that should only exist in the database)
// by the data store after exploration.
Here, too, I wonder if the datastore needs to be involved in this communication at all 🤔
- organization <- organizationDAO.findOne(dataset._organization)
- } yield Ok(Json.toJson(DataSourceId(dataset.directoryName, organization._id)))
+ dataset <- datasetDAO.findOne(annotation._dataset) ?~> "dataset.notFound"
+ } yield Ok(Json.toJson(dataset._id.toString))
- } yield Ok(Json.toJson(dataset._id.toString))
+ } yield Ok(Json.toJson(dataset._id))
(I think an ObjectId can be passed to toJson directly)
- def hasSegmentIndexFile(organizationId: String, datasetName: String, layerName: String)(
-     implicit ec: ExecutionContext): Fox[Boolean] = {
-   val cacheKey = (organizationId, datasetName, layerName)
+ def hasSegmentIndexFile(datasetId: String, layerName: String)(implicit ec: ExecutionContext): Fox[Boolean] = {
Could we use ObjectIds here directly? (Also in the cache key)
  userToken <- bearerTokenService.createAndInitDataStoreTokenForUser(user)
- _ <- client.addDataSource(organization._id, datasetName, dataSource, folderId, userToken)
+ datasetId <- datasetService.createVirtualDataset(
+   dataSource.id.directoryName,
Conceptual question: What do we expect to be the directoryName for a remote dataset?
If everything were virtual, we would not need it. But for virtual datasets we could use it as a URL-safe name (for Notion-style routes); that is what it is: a value derived from the name of the dataset.
case class ComposeRequest(
    newDatasetName: String,
    targetFolderId: String,
    organizationId: String,
    voxelSize: VoxelSize,
    layers: Seq[ComposeRequestLayer]
)

object ComposeRequest {
  implicit val composeRequestFormat: OFormat[ComposeRequest] = Json.format[ComposeRequest]
}

case class ComposeRequestLayer(
    datasetId: String,
    sourceName: String,
    newName: String,
    transformations: Seq[CoordinateTransformation]
)
Could we make the various IDs here ObjectIds?
def composeDataset(composeRequest: ComposeRequest, user: User)(
    implicit ctx: DBAccessContext): Fox[(DataSource, ObjectId)] =
  for {
    _ <- isComposable(composeRequest) ?~> "Datasets are not composable, they are not on the same data store"
The result boolean isn't used here. I think we need something like `Fox.assertTrue(isComposable(composeRequest))`. Alternatively, it could be `assertComposable: Fox[Unit]`, which returns a `Fox.failure` if `dataStores.distinct.size` isn't 1.
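A sketch of that alternative, assuming the DAO calls used elsewhere in this service:

// assertComposable: fails the Fox if the layers span multiple datastores (sketch).
private def assertComposable(composeRequest: ComposeRequest)(implicit ctx: DBAccessContext): Fox[Unit] =
  for {
    datasets <- Fox.serialCombined(composeRequest.layers.map(_.datasetId).toList)(datasetDAO.findOne)
    _ <- Fox.fromBool(datasets.map(_._dataStore).distinct.size == 1) ?~> "Datasets are not on the same data store"
  } yield ()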
for {
  datasetIdValidated <- ObjectId.fromString(composeLayer.datasetId) ?~> "Invalid dataset ID"
  dataset <- datasetDAO.findOne(datasetIdValidated) ?~> "Dataset not found"
  ds <- datasetService.fullDataSourceFor(dataset)
I'd prefer to write out `dataSource`, because `ds` also fits "dataset", which could lead to ambiguity. This was not introduced here, but I think it would be a good change now that we are here.
datasetIdValidated <- ObjectId.fromString(composeLayer.datasetId) ?~> "Invalid dataset ID"
dataset <- datasetDAO.findOne(datasetIdValidated) ?~> "Dataset not found"
ds <- datasetService.fullDataSourceFor(dataset)
ds <- ds.toUsable.toFox ?~> "Dataset not usable"
And here maybe usableDataSource, to avoid shadowing the variable.
@fm3 I think I handled all comments except #8708 (comment). If you want to take a look already, do so, otherwise I will continue working on it on Monday. |
Actionable comments posted: 1
♻️ Duplicate comments (3)
unreleased_changes/8708.md (1)

4-8: Include API version and complete migration details.

Based on the changes across the codebase, the changelog is missing important details that were previously requested:
- The new API version 10 is not mentioned
- The change from `wkwResolutions` to `mags` for WKW datasets is not documented

Apply this diff to provide complete migration information:

 ### Breaking Changes
 - Datasets are now referenced by their ID instead of the combination of the organization and the directory name.
 - Newly uploaded datasets do not use the `wkwResolutions` property anymore, but instead use `mags` like imported remote datasets.
-- Interacting with newly created datasets requires libraries to support API version of 10 or higher.
+- Interacting with newly created datasets requires libraries to support API version 10 or higher.
+- WKW datasets may now be written with the `mags` key instead of `wkwResolutions`. Client libraries must be updated to handle both formats when reading WKW datasets.

app/models/dataset/ComposeService.scala (1)

55-74: Improve variable naming for clarity.

While the implementation is solid, the variable naming could be clearer to avoid ambiguity. Apply this diff to improve readability:

 private def getLayerFromComposeLayer(composeLayer: ComposeRequestLayer)(
     implicit ctx: DBAccessContext): Fox[DataLayer] =
   for {
     dataset <- datasetDAO.findOne(composeLayer.datasetId) ?~> "Dataset not found"
-    dataSource <- datasetService.fullDataSourceFor(dataset)
-    usableDataSource <- dataSource.toUsable.toFox ?~> "Dataset not usable"
+    fullDataSource <- datasetService.fullDataSourceFor(dataset)
+    usableDataSource <- fullDataSource.toUsable.toFox ?~> "Dataset not usable"
     layer <- usableDataSource.dataLayers.find(_.name == composeLayer.sourceName).toFox

webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/LegacyController.scala (1)

63-79: Consider the architectural approach for legacy API compatibility.

This controller duplicates significant functionality from other controllers to maintain v9 API compatibility. While functional, this approach creates substantial code duplication and maintenance burden.

The past review comment from fm3 suggests exploring a route delegation approach where the datastore could call WK to get a dataset ID from directoryName+organizationName, then delegate to new implementations. This could significantly reduce code duplication.

Consider implementing this suggested delegation pattern in a follow-up to reduce maintenance overhead.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (24)
- app/controllers/AnnotationIOController.scala (2 hunks)
- app/controllers/WKRemoteDataStoreController.scala (3 hunks)
- app/controllers/WKRemoteTracingStoreController.scala (2 hunks)
- app/models/annotation/AnnotationService.scala (4 hunks)
- app/models/dataset/ComposeService.scala (1 hunk)
- app/models/dataset/Dataset.scala (6 hunks)
- app/models/dataset/DatasetService.scala (12 hunks)
- app/models/dataset/WKRemoteDataStoreClient.scala (5 hunks)
- app/models/dataset/explore/WKExploreRemoteLayerService.scala (1 hunk)
- conf/evolutions/137-virtual-datasets.sql (1 hunk)
- conf/evolutions/reversions/137-virtual-datasets.sql (1 hunk)
- tools/postgres/schema.sql (2 hunks)
- unreleased_changes/8708.md (1 hunk)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/BinaryDataController.scala (10 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala (18 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/LegacyController.scala (1 hunk)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/ZarrStreamingController.scala (12 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/dataformats/layers/WKWDataLayers.scala (4 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/helpers/DatasetDeleter.scala (4 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/datasource/DataLayer.scala (3 hunks)
- webknossos-datastore/conf/datastore.versioned.routes (1 hunk)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteDatastoreClient.scala (2 hunks)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteWebknossosClient.scala (3 hunks)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/RemoteFallbackLayer.scala (2 hunks)
🧠 Learnings (8)
📓 Common learnings
Learnt from: frcroth
PR: scalableminds/webknossos#8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the `updateMags` method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
1. Non-WKW datasets have `magsOpt` populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
2. WKW datasets will have `wkwResolutionsOpt` populated and use the second branch which includes cubeLength.
3. The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
unreleased_changes/8708.md (4)

Learnt from: frcroth (PR #8609, app/models/dataset/Dataset.scala:753-775, 2025-05-12): In the `updateMags` method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly: non-WKW datasets have `magsOpt` populated and use the first branch, which includes axisOrder, channelIndex, and credentialId; WKW datasets will have `wkwResolutionsOpt` populated and use the second branch, which includes cubeLength; the final branch is a fallback for legacy data. This ensures appropriate fields are populated for each dataset type.

Learnt from: frcroth (PR #8609, conf/evolutions/133-datasource-properties-in-db.sql:8-16, 2025-05-12): The database schema in WEBKNOSSOS has separate tables for dataset layers (`dataset_layers`) and magnifications (`dataset_mags`). The `dataFormat` field is stored in the layers table while magnification-specific fields like `cubeLength` (specific to WKW format) are stored in the mags table.

Learnt from: dieknolle3333 (PR #8168, frontend/javascripts/oxalis/model/sagas/volumetracing_saga.tsx:433-434, 2024-11-22): In the codebase, certain usages of `segmentationLayer.resolutions` are intentionally retained and should not be changed to `segmentationLayer.mags` during refactoring.

Learnt from: dieknolle3333 (PR #8168, frontend/javascripts/oxalis/model/sagas/proofread_saga.ts:1039, 2024-11-22): In `proofread_saga.ts`, when calling `getMagInfo`, the use of `volumeTracingLayer.resolutions` is intentional and should not be changed to `volumeTracingLayer.mags`.

webknossos-datastore/conf/datastore.versioned.routes (1)

Learnt from: frcroth (PR #8609): see the `updateMags` learning above.

webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteDatastoreClient.scala (5)

Learnt from: frcroth (PR #8609): see the `updateMags` learning above.

Learnt from: frcroth (PR #8202, webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/DatasetErrorLoggingService.scala:68-72, 2024-11-25): In `DatasetErrorLoggingService.scala`, prefer using `TextUtils.stackTraceAsString(exception)` when logging exceptions instead of passing the exception directly to `logger.error`.

Learnt from: MichaelBuessemeyer (PR #8236, webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/NeuroglancerPrecomputedShardingUtils.scala:23-28, 2025-04-25): AlfuCache in the WebKnossos codebase has default configured TTL (time-to-live) and size limits, providing automatic protection against unbounded memory growth.

Learnt from: frcroth (PR #8236, webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshFileService.scala:170-173, 2025-04-23): In the webknossos codebase, classes extending `FoxImplicits` have access to an implicit conversion from `Option[A]` to `Fox[A]`, where `None` is converted to an empty Fox that fails gracefully in for-comprehensions.

Learnt from: MichaelBuessemeyer (PR #8352, app/models/organization/CreditTransactionService.scala, 2025-01-27): In Scala's for-comprehension with Fox (Future-like type), the `<-` operator ensures sequential execution. If any step fails, the entire chain short-circuits and returns early, preventing subsequent operations from executing. This makes it safe to perform validation checks before database operations.

app/models/dataset/ComposeService.scala (4)

Learnt from: frcroth (PR #8609): see the `updateMags` learning above.

Learnt from: frcroth (PR #8236, webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshFileService.scala:161-166, 2025-04-28): In Scala for-comprehensions with the Fox error handling monad, `Fox.fromBool()` expressions should use the `<-` binding operator instead of the `=` assignment operator to properly propagate error conditions. Using `=` will cause validation failures to be silently ignored.

Learnt from: MichaelBuessemeyer (PR #8352): see the Fox for-comprehension learning above.

Learnt from: frcroth (PR #8236): see the `FoxImplicits` learning above.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/dataformats/layers/WKWDataLayers.scala (5)

Learnt from: frcroth (PR #8609): see the `updateMags` learning above.

Learnt from: dieknolle3333 (PR #8168): see the `segmentationLayer.resolutions` learning above.

Learnt from: frcroth (PR #8609): see the `dataset_layers`/`dataset_mags` schema learning above.

Learnt from: dieknolle3333 (PR #8168): see the `getMagInfo` learning above.

Learnt from: dieknolle3333 (PR #8168, frontend/javascripts/oxalis/model/reducers/skeletontracing_reducer_helpers.ts:125, 2024-11-22): In the `createNode` function within `skeletontracing_reducer_helpers.ts`, the property `resolution` is intentionally kept even when the parameter is renamed to `mag`.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/ZarrStreamingController.scala (1)

Learnt from: frcroth (PR #8609): see the `updateMags` learning above.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala (3)

Learnt from: frcroth (PR #8609): see the `updateMags` learning above.

Learnt from: MichaelBuessemeyer (PR #8352): see the Fox for-comprehension learning above.

Learnt from: frcroth (PR #8236): see the `FoxImplicits` learning above.

🧬 Code Graph Analysis (1)

webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteDatastoreClient.scala (4)
- util/src/main/scala/com/scalableminds/util/cache/AlfuCache.scala (3): AlfuCache (15-42), AlfuCache (44-78), getOrLoad (17-27)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteWebknossosClient.scala (2): getDatasetIdForAnnotation (77-88), getDataStoreUriForDataset (71-77)
- util/src/main/scala/com/scalableminds/util/tools/Fox.scala (3): s (229-233), s (233-243), toFox (14-14)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/rpc/RPCRequest.scala (2): withTokenFromContext (33-36), getWithJsonResponse (100-103)
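To make the recurring Fox learnings concrete, here is a minimal sketch of the short-circuiting for-comprehension pattern they describe (the DAO update method is hypothetical):

// Each <- runs sequentially; a failing step short-circuits the chain,
// so the validation is guaranteed to happen before the write.
def setLargestSegmentId(datasetId: ObjectId, value: Long)(implicit ec: ExecutionContext): Fox[Unit] =
  for {
    _ <- Fox.fromBool(value >= 0) ?~> "largestSegmentId must be non-negative" // bound with <-, not =
    dataset <- datasetDAO.findOne(datasetId) ?~> "Dataset not found"
    _ <- datasetDAO.updateLargestSegmentId(dataset._id, value) // hypothetical DAO method
  } yield ()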
🚧 Files skipped from review as they are similar to previous changes (16)
- tools/postgres/schema.sql
- conf/evolutions/reversions/137-virtual-datasets.sql
- app/controllers/AnnotationIOController.scala
- app/models/dataset/explore/WKExploreRemoteLayerService.scala
- app/controllers/WKRemoteTracingStoreController.scala
- app/models/annotation/AnnotationService.scala
- app/models/dataset/Dataset.scala
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteWebknossosClient.scala
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/datasource/DataLayer.scala
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/RemoteFallbackLayer.scala
- app/controllers/WKRemoteDataStoreController.scala
- conf/evolutions/137-virtual-datasets.sql
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/helpers/DatasetDeleter.scala
- app/models/dataset/WKRemoteDataStoreClient.scala
- app/models/dataset/DatasetService.scala
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/BinaryDataController.scala
🔇 Additional comments (17)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteDatastoreClient.scala (1)

39-161: Clean refactoring to dataset ID-based identification.

The migration from `(organizationId, datasetDirectoryName)` tuples to `ObjectId` for dataset identification is well-executed. The cache keys and method signatures are consistently updated throughout the class.

webknossos-datastore/conf/datastore.versioned.routes (1)

3-214: Excellent implementation of versioned routes.

The routing configuration properly addresses all previous feedback:
- New v10 routes for the latest API version
- Explicit route declarations for v5-v9 instead of forwarding
- Consistent use of `LegacyController` methods with a `V9` suffix
- Comprehensive coverage of all endpoint types including Zarr2 and Zarr3

webknossos-datastore/app/com/scalableminds/webknossos/datastore/dataformats/layers/WKWDataLayers.scala (1)

26-152: Well-implemented migration to unified magnification handling.

The refactoring from `wkwResolutions` to `mags` is excellently handled:
- Backward compatibility is maintained through custom JSON formats that attempt to read `mags` first, then fall back to `wkwResolutions` (see the sketch after this comment list)
- The fixed `lengthOfUnderlyingCubes` value (1024) is appropriately retained as noted in previous discussions
- Clear error messages when neither field is present
- Clean trait hierarchy with `DataLayerWithMagLocators`

webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/LegacyController.scala (3)

95-96: Access control validation is consistent with legacy patterns.

The access control correctly uses `DataSourceId(datasetDirectoryName, organizationId)` for legacy compatibility, which aligns with the v9 API expectations.

100-103: Data retrieval pattern is consistent throughout the controller.

The pattern of using `dataSourceRepository.getDataSourceAndDataLayer()` with organization and directory parameters is consistently applied across all legacy endpoints, maintaining backward compatibility.

374-374: Service delegation reduces code duplication for Zarr operations.

Good use of `zarrStreamingService.getHeader()` to delegate Zarr-specific logic rather than duplicating it in this controller. This pattern is consistently applied throughout the Zarr routes.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/ZarrStreamingController.scala (5)

44-52: Dataset ID transition is correctly implemented.

The method signature correctly uses `datasetId: ObjectId` instead of separate organization and directory parameters. Access control properly uses `UserAccessRequest.readDataset(datasetId)` and data retrieval uses the unified `datasetCache.getWithLayer()` pattern.

79-85: Private link annotation handling updated correctly.

The annotation private link functionality correctly adapts to use `annotationSource.datasetId` with the new cache pattern while maintaining the same error handling approach.

208-212: Service delegation pattern is well implemented.

The refactor properly delegates Zarr cube operations to `zarrStreamingService.rawZarrCube()` while maintaining proper access control and error handling.

328-328: Display path updated for dataset ID consistency.

The display path correctly uses `datasetId` instead of the old organization/directory format, maintaining consistency with the new identification scheme.

186-186: Access control is enforced for annotation dataset access.

The `datasetCache.getById(annotationSource.datasetId)` call lives inside the same `accessTokenService.validateAccessFromTokenContext(UserAccessRequest.readDataset(datasetId))` block as the non-annotation endpoint. Since all `datasetCache.getById` invocations here are wrapped by that `validateAccessFromTokenContext` guard, there is no bypass of access controls in the annotation flow.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala (6)

69-75: Method signature and access control correctly updated.

The transition from separate organization/directory parameters to `datasetId: ObjectId` is properly implemented with correct access control using `UserAccessRequest.readDataset(datasetId)`.

251-257: Mapping operations retain DataSourceId dependency.

The code correctly notes that mappings are still accessed by DataSourceId rather than datasetId. This is a reasonable intermediate state during the transition but should be tracked for future migration.

351-365: Update method properly handles disk vs remote datasets.

The logic correctly distinguishes between datasets that exist on disk vs virtual datasets, updating appropriately via disk operations or remote client calls. The data source ID preservation ensures consistency.

369-379: Add method correctly implements dataset registration.

The method properly constructs a DataSourceId for compatibility and delegates to the remote client for dataset registration. The response correctly returns the new dataset ID.

441-461: Delete operation handles both disk and virtual datasets.

The method correctly handles both scenarios: deleting datasets that exist on disk and virtual datasets. The conditional logic properly removes from both disk and database for disk-based datasets, or delegates to the remote client for virtual datasets.

430-439: Reload method uses proper cache invalidation pattern.

The reload method correctly clears caches, then calls refreshDataSource to get the updated version. This ensures proper cache consistency.
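As referenced in the WKWDataLayers comment above, a minimal play-json sketch of the mags-first read with a wkwResolutions fallback could look like this (MagLocator, WKWResolution, and the toMagLocator conversion are stand-ins here, not the actual implementation):

import play.api.libs.json._

// Tolerant read: prefer the new "mags" key, fall back to legacy "wkwResolutions".
def magsReads(json: JsValue): JsResult[List[MagLocator]] =
  (json \ "mags").validate[List[MagLocator]].orElse(
    (json \ "wkwResolutions")
      .validate[List[WKWResolution]]
      .map(_.map(_.toMagLocator)) // hypothetical conversion to the unified mag type
  ).orElse(JsError("Neither 'mags' nor 'wkwResolutions' is present"))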
private def refreshDataSource(datasetId: ObjectId)(implicit tc: TokenContext): Fox[DataSource] =
  for {
    dataSourceInDB <- datasetCache.getById(datasetId) ~> NOT_FOUND
    dataSourceId = dataSourceInDB.id
    dataSourceFromDir <- Fox.runIf(
      dataSourceService.existsOnDisk(dataSourceId.organizationId, dataSourceId.directoryName)) {
      dataSourceService
        .dataSourceFromDir(
          dataSourceService.dataBaseDir.resolve(dataSourceId.organizationId).resolve(dataSourceId.directoryName),
          dataSourceId.organizationId)
        .toUsable
        .toFox
    }
    _ <- dataSourceFromDir match {
      case Some(ds) => dsRemoteWebknossosClient.updateDataSource(ds, datasetId)
      case _        => Fox.successful(())
    }
    dataSource <- datasetCache.getById(datasetId) ~> NOT_FOUND
  } yield dataSource
💡 Verification agent
🧩 Analysis chain
RefreshDataSource method needs careful review.
This critical method handles cache and disk synchronization, but the logic seems complex and potentially racy.
The method:
- Gets dataSource from cache
- Conditionally reads from disk if it exists
- Updates remote WK if disk version exists
- Gets dataSource from cache again
This could have race conditions if the cache is invalidated between steps or if multiple concurrent calls occur.
🏁 Script executed:
#!/bin/bash
# Check for other similar cache refresh patterns in the codebase
echo "=== Looking for similar cache refresh patterns ==="
rg -A 10 -B 5 "getById.*datasetId.*NOT_FOUND"
echo "=== Check for cache invalidation patterns ==="
rg -A 5 -B 5 "invalidateCache"
echo "=== Look for concurrent access patterns around refreshDataSource ==="
rg -A 5 -B 5 "refreshDataSource"
Length of output: 26279
Prevent stale cache hit in refreshDataSource

In DataSourceController.scala (lines 692–711), we call datasetCache.getById both before and after updating the remote data source, but never invalidate the cache, so the final returned DataSource will still reflect the old cached state.

Recommendations:
- Immediately after `case Some(ds) => dsRemoteWebknossosClient.updateDataSource(ds, datasetId)`, insert `_ = datasetCache.invalidateCache(datasetId)`.
- This ensures the subsequent `dataSource <- datasetCache.getById(datasetId) ~> NOT_FOUND` fetches the fresh, updated record.

Alternatively, if a disk-loaded ds is present, consider returning that directly instead of re-reading the cache.
🤖 Prompt for AI Agents
In
webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala
around lines 692 to 711, the method refreshDataSource calls datasetCache.getById
before and after updating the remote data source but does not invalidate the
cache, causing the final returned DataSource to be stale. To fix this,
immediately after the line where dsRemoteWebknossosClient.updateDataSource is
called inside the case Some(ds) block, add a call to
datasetCache.invalidateCache(datasetId) to clear the cache. This ensures that
the subsequent datasetCache.getById fetches the updated DataSource.
Alternatively, you could return the disk-loaded ds directly if present instead
of re-reading the cache.
Right, I forgot the invalidate call
@frcroth Thanks for confirming! Yes, adding the cache invalidation call right after the `updateDataSource` call will ensure the subsequent cache read gets the fresh data:
_ <- dataSourceFromDir match {
case Some(ds) =>
for {
_ <- dsRemoteWebknossosClient.updateDataSource(ds, datasetId)
_ = datasetCache.invalidateCache(datasetId)
} yield ()
case _ => Fox.successful(())
}
This will prevent the stale cache issue and ensure `refreshDataSource` returns the updated `DataSource`.
/| /|
( :v: )
|(_)|
^ ^
🧠 Learnings used
Learnt from: frcroth
PR: scalableminds/webknossos#8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the `updateMags` method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
1. Non-WKW datasets have `magsOpt` populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
2. WKW datasets will have `wkwResolutionsOpt` populated and use the second branch which includes cubeLength.
3. The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.
If you found this review helpful, would you consider giving us a shout-out on X?
Thank you for using CodeRabbit!
URL of deployed dev instance (used for testing):
Steps to test:
Implementation notes
DataSource Id is still used internally in the datastore for various caches, also in the binary data handling.
Handling of real disk data sources is still done via organizationId and datasetDirectoryName (e.g., uploading, storage size).
Everything else should use dataset ids (see the sketch below).
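A sketch of that split, assuming the datastore's datasetCache and DataSourceId as used in the code above: disk operations still resolve the legacy pair, while everything else keys on the dataset id.

// Virtual datasets are addressed purely by ObjectId; only disk-backed
// operations still need the legacy (organizationId, directoryName) pair.
def diskPathFor(datasetId: ObjectId)(implicit ec: ExecutionContext, tc: TokenContext): Fox[java.nio.file.Path] =
  for {
    dataSource <- datasetCache.getById(datasetId) // resolved via WK, as in refreshDataSource above
    dataSourceId = dataSource.id
  } yield dataSourceService.dataBaseDir
    .resolve(dataSourceId.organizationId)
    .resolve(dataSourceId.directoryName)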
TODOs:
Issues:
Followups:
(Please delete unneeded items, merge only when none are left open)
(Create a `$PR_NUMBER.md` file in `unreleased_changes` or use `./tools/create-changelog-entry.py`.)